Platform and app histories: Assessing source availability in web archives and app repositories

Anne Helmond (University of Amsterdam)
Fernando van der Vlist (Utrecht University/University of Siegen)

Published in: The Past Web: Exploring Web Archives

Abstract

In this chapter, we discuss the research opportunities for historical studies of apps and platforms by focusing on their distinctive characteristics and material traces. We demonstrate the value and explore the utility and breadth of web archives and software repositories for building corpora of archived platform and app sources. Platforms and apps notoriously resist archiving due to their ephemerality and continuous updates. As a consequence, their histories are being overwritten with each update, rather than written and preserved. We present a method to assess the availability of archived web sources for social media platforms and apps across the leading web archives and app repositories. Additionally, we conduct a comparative source set availability analysis to establish how, and how well, various source sets are represented across web archives. Our preliminary results indicate that despite the challenges of social media and app archiving, many material traces of platforms and apps are in fact well preserved. We understand these contextual materials as important primary sources through which digital objects such as platforms and apps co-author their own ‘biographies’ with web archives and software repositories.

Keywords

platforms, apps, web historiography, web archiving, app archiving

Introduction

Contemporary digital objects, such as digital platforms and mobile apps, pose significant challenges to archiving and research practice. With millions or even billions of monthly active users, some of these platforms and apps are among the most popular products and services around the world (Statista, 2017; Statista, 2019). Yet, despite their social, economic, and cultural significance, many of their histories are at risk of getting lost. As a result of rapid release cycles that enable developers to deploy their code very quickly, large web platforms such as Facebook and YouTube change continuously, overwriting their material presence with each new deployment. Similarly, the pace of mobile app development and deployment is only growing, with each new software update overwriting the previous version.
In this chapter, we consider how one might write the histories of these new digital objects, despite such challenges. We reflect on the materiality of platforms and apps as specific types of digital objects and outline a method to take inventory of their archived materials for historical studies. As we argue, these archived sources offer various opportunities for historical studies of platforms and apps. That is, the routine overwriting of digital objects and their data through continuous incremental software updates constitutes both a core problem and a source of research opportunities for historians – at least, as long as those changes are documented by these digital objects themselves or preserved by web archives. We therefore look into the source availability of preserved material traces of platforms and apps.

In the first section, we consider how, from a material perspective, platforms and apps are different from other digital objects such as websites. As a consequence, there are challenges with regard to their archiving and study as well as new opportunities. In the second section, we describe a method of taking inventory of the available materials for writing platform and app histories. The method is not just useful for building corpora of historical platform or app sources but also potentially valuable for determining significant omissions in web archives and for guiding future archiving practices. In the third section, we describe the outcomes of an exploratory case study of the availability of leading platforms and apps today. We conclude with a reflection on the future of platform and app historiography.

The archived materiality of platforms and apps

The early web mainly consisted of websites and interlinked web pages. As a consequence, the website has become the main unit of archiving as well as the main unit of historical analysis (Brügger, 2018). However, in the past decade, we have witnessed the emergence of new types of digital objects, in particular, digital platforms and apps for social media and beyond. But what characterises these specific digital objects as archived objects, as compared to the website or web page? When thinking of how platforms and apps are archived today, we contend that we need to consider their specific materiality. With the term materiality, we refer to the material form of those digital objects themselves as well as the material circumstances of those objects that leave material traces behind, including developer resources and reference documentation, business tools and product pages, and help and support pages (Ankerson, 2012; Fuller, 2008; Gillespie, 2003; Kirschenbaum, 2003). Furthermore, developers commonly keep changelogs, release notes, and do versioning. Importantly, rather than secondary sources, which are commonly used for web histories of platforms and apps (Brügger, 2015; Poulsen, 2018), these materials are primary sources that offer particular research opportunities or that may be supplemented and triangulated for accuracy. These material traces may ‘tell stories’ about the evolving production, preferred usage and embedded politics of software objects (Gillespie, 2003).
We understand these contextual materials as important primary sources through which digital objects such as platforms and apps write, or indeed overwrite, their own ‘biographies’, thus building on the emerging genre of media biography, including ‘software biography’, ‘website biography’, and ‘platform biography’ (Burgess & Baym, 2020; Natale, 2016; Rogers, 2017; Pollock & Williams, 2008). The dual materiality of platforms and apps, as software objects and as sets of material contextual traces, opens up a productive avenue for historical analysis. Even when a platform or app as such is not archived, we may turn to web archives to look for their contextual material traces instead. These traces ‘provide a potential entryway to the web cultures, production practices, and symbolic systems informing lost cultural artifacts’ (Ankerson, 2012: 392). Furthermore, these ‘textual supplements are perhaps even more potent because they seem to be part of the tool itself’ as they document a ‘self-interpretation’ of the software object that we may employ for its history writing (Gillespie, 2003).

Web archives

The materiality of a web platform manifests as a collection of interrelated web pages that are meaningfully arranged to address different groups of users on different ‘sides’. That is, platforms are programmable infrastructures as well as digital intermediaries that bring together different groups of users (Gillespie, 2010; Helmond, 2015; de Reuver, Sørensen, & Basole, 2018). For each user group, there are different sets of resources and documentation that describe the operational logics, stakeholder relations, and preferred uses of a platform. For example, social media platforms provide such materials for their various user groups, which include end-users, developers, businesses, advertisers, partners, creators, media and publishers, politicians, investors, and researchers. As we have outlined previously, these different sets of materials are well archived and afford and privilege different types of social media and platform history (Helmond & van der Vlist, 2019). To locate historical platform resources and documentation, we may turn towards web archives.

The materiality of apps is different from that of platforms. While many digital platforms exist principally on the web and operate tools, products, and services on multiple ‘sides’ to different groups of users, apps are software bundles (or packages) that are downloaded directly onto mobile devices from app stores. In contrast to websites and web platforms, mobile apps are not web ‘native’ and instead reside on mobile devices and in app stores, which makes them even more difficult to archive and study. Yet they are entangled with a variety of other web services (Dieter et al., 2019). App stores, arguably, are a ‘native’ environment for apps. For end-users, apps present themselves as contained digital objects that are purchased and downloaded from platform-specific app stores, such as Google Play for Android or the App Store for the iOS operating system. Yet by their design, app stores only provide access to the latest version of an app bundle and not to former versions. With each new software update, a former app version is overwritten – both inside the app store and on the user’s mobile device. As a result, neither app stores nor mobile devices keep former versions of apps, which poses challenges for historical app studies.
App repositories

To locate former app bundle versions, we may turn to several third-party software repositories, such as Cydia for iOS apps or APKMirror for Android apps. Contrary to traditional institutional archives, these repositories are non-institutional storage locations for the retrieval of software that were never designed for permanent preservation (Allix, Bissyandé, Klein, & Le Traon, 2016). While they may share commonalities with archives, software repositories do not curate collections of ‘records’ for permanent historical preservation and do not necessarily consider their value as evidence or as a source for historical research (Brügger, 2018). Additionally, the use of software repositories as app archives raises issues with regard to archive incompleteness and software insecurity. They are incomplete because they rely on users manually uploading app versions; they pose security risks because not all repositories scan package uploads for malicious code injections. When app code is tampered with, this may directly limit or influence historical code-based analyses. And even if we find former app versions in repositories, we still face software emulation challenges with apps, as they typically require a complex set of dependencies and will only ‘run’ or operate on specific devices and operating systems of the past (Boss & Broussard, 2017; Helmond & van der Vlist, 2019; Stevenson & Gehl, 2018).

As an alternative or additional strategy, app historians may turn to archived app metadata sources as preserved in web archives that hold ‘snapshots’ of app details pages in app stores or repositories. While apps and app stores both exist primarily on mobile devices, the leading app stores – Google Play and Apple’s App Store – also provide web-based graphical user interfaces to their stores. These stores contain a wealth of information about specific apps as well as their relations to other, ‘Similar’ apps, and the store categories or app collections to which they belong (Dieter et al., 2019). For each app, there is a details page with the app’s title, developer, bundle version, screenshots, description, requested app permissions, download statistics, reviews, ratings, and more. Fortunately, these app store details pages are preserved in web archives, which generates opportunities for historical app studies. In short, to locate historical app materials, we may thus either turn to app repositories to retrieve former app versions or to web archives to retrieve contextual information.

Assessing the availability of platform and app sources

To determine whether these materials have been preserved, and where they are located, we conducted an exploratory study of the availability of archived sources for platform and app history. Building on previous work (Helmond & van der Vlist, 2019), we first detail a method for assessing the availability of archived web sources for platforms and apps in web archives and app repositories. Making use of the market data portals Statista and App Annie, we selected the current top-20 most popular social media platforms and the top-10 mobile apps for Android and iOS combined, both based on the current number of active users worldwide (App Annie, 2019; Statista, 2019). For the first source set of social media platforms, we made an inventory of their most prominent ‘sides’ and created a list of URLs pointing to the location of their principal materials (for example twitter.com, developer.twitter.com, business.twitter.com, marketing.twitter.com, investor.twitterinc.com).
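To make this corpus-building step concrete, the source-set inventory can be kept as a simple mapping from platform ‘sides’ to URLs. The following is a minimal Python sketch using the Twitter URLs listed above; the dictionary layout and helper function are our own illustration, not part of the chapter’s tooling.

```python
# Inventory of platform 'sides' and their principal URLs (example: Twitter).
# The structure is illustrative; the chapter only prescribes a list of URLs per platform.
PLATFORM_SOURCES = {
    "Twitter": {
        "end-users": "https://twitter.com",
        "developers": "https://developer.twitter.com",
        "businesses": "https://business.twitter.com",
        "marketing": "https://marketing.twitter.com",
        "investors": "https://investor.twitterinc.com",
    },
    # ...analogous entries for the other platforms in the source set
}

def all_source_urls(sources):
    """Flatten the per-platform inventory into a single list of URLs to query."""
    return [url for sides in sources.values() for url in sides.values()]
```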
For the second source set of mobile apps, we created a list of URLs pointing to the app store details pages for each app. These URLs contain the unique bundle identifier of each app, which remains stable even when apps are continuously updated and overwritten. App store links are constructed with these bundle identifiers and thus also remain stable over time. So, although apps are updated continuously, they have a stable bundle identifier and a stable web URL that points to a details page that we may track in archives over time. In addition, we used these unique bundle identifiers to locate these apps in ten prominent third-party software repositories for Android apps.

To assess which web archives actually hold archival records of a particular resource, we employed Memento’s Time Travel Service (Van de Sompel et al., 2009). The service functions as a search engine ‘on top of’ the 25 leading international web archives, and may be queried for specific URLs (Memento, 2016). For end-users, it offers a graphical user interface (GUI) that may be deployed to manually query and locate a URL across multiple web archives. Additionally, it offers an application programming interface (API) to programmatically request that data. Both methods return a list of web archives that hold one or more Mementos (i.e., time-stamped archived copies of a specific URL). For each queried URL, the service returns the first and last Mementos available as well as links to all available captures across archives. Time Travel thus provides a simple method to assess the availability of specific archived sources across web archives. To determine the total number of Mementos held, or the number of archives holding them, users may follow the ‘All captures from’ link for each web archive and manually count the number of Mementos held.

To scale and automate this process for a large source set of URLs, researchers may use MemGator, an open-source command-line utility that is built ‘on top of’ the Memento API and aggregates Mementos. MemGator programmatically requests Memento TimeMaps from a list of web archives that support the Memento protocol (Alam & Nelson, 2016). Each TimeMap provides a time-stamped list of all Mementos held in that archive for a given URL (Memento, 2015). It also lets researchers customise the list of web archives from which to request TimeMaps. For present purposes, we extended MemGator’s list of web archives that natively support the Memento protocol, as specified in ‘archives.json’, with a number of web archives listed in the Time Travel Archive Registry that run Memento proxies (Memento, 2015), so as to be as inclusive as possible in our exploratory study. Our custom list included 20 web archives from which to programmatically retrieve data. More specifically, we used MemGator to programmatically retrieve the available platform and app materials from across these 20 web archives and then analysed the results to assess the availability of sources. In what follows, we describe the results of our exploratory study.

**The availability of platform and app sources**

We analysed the source availability of platform and app materials according to three criteria: first, the volume of availability, or the total number of Mementos held; second, the depth of availability, specified as the number of days, months, or years between the first and last Mementos; and third, the breadth of availability, referring to the number of web archives holding those Mementos (Helmond & van der Vlist, 2019).
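As an illustration of how these three counts can be derived programmatically, the following Python sketch retrieves a link-format TimeMap for a single URL and computes volume, depth, and an approximate breadth. It assumes the Memento aggregator’s public TimeMap endpoint and approximates breadth by counting distinct archive hostnames among the Mementos; the chapter’s actual workflow used MemGator against a customised list of 20 archives, so treat this only as a minimal sketch under those assumptions.

```python
# Minimal sketch: derive volume, depth, and (approximate) breadth from a TimeMap.
# The endpoint template and the hostname-based breadth heuristic are assumptions.
import re
import urllib.request
from datetime import datetime
from urllib.parse import urlparse

TIMEMAP = "http://timetravel.mementoweb.org/timemap/link/{url}"

def assess_availability(url):
    """Return (volume, depth_in_days, breadth) for one archived URL."""
    with urllib.request.urlopen(TIMEMAP.format(url=url)) as response:
        timemap = response.read().decode("utf-8", errors="replace")

    mementos = []
    for line in timemap.splitlines():
        if 'datetime="' not in line:
            continue  # only memento entries carry a datetime attribute
        target = re.search(r"<([^>]+)>", line)
        stamp = re.search(r'datetime="([^"]+)"', line)
        if target and stamp:
            captured = datetime.strptime(stamp.group(1), "%a, %d %b %Y %H:%M:%S %Z")
            mementos.append((captured, urlparse(target.group(1)).netloc))

    if not mementos:
        return 0, 0, 0
    volume = len(mementos)                                      # total Mementos held
    depth = (max(t for t, _ in mementos) - min(t for t, _ in mementos)).days
    breadth = len({host for _, host in mementos})               # distinct archive hosts
    return volume, depth, breadth

# Example: assess_availability("https://play.google.com/store/apps/details?id=com.whatsapp")
```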
The first two criteria determine the amount of available material and the possible levels of granularity for historical analysis, while the third criterion enables researchers to triangulate and verify historical sources, such as when certain elements are corrupted or missing. In Tables 1–2, we provide a summary of our exploratory study results. For both of our source sets, we counted the total number of Mementos held across web archives (i.e., volume), counted the number of web archives holding those Mementos (i.e., breadth, expressed as a single number up to 20 web archives), and determined the time span between the first and last Mementos held (i.e., depth, expressed in number of days). Taken together, these three dimensions provide a useful account of source availability and allow researchers to determine the feasibility of certain historical projects, or allow archiving practitioners to reconsider their archiving strategy. Based on these counts, we then calculated an availability rank for each platform and app by calculating the number of captures per day (volume divided by depth) and then multiplying that number by breadth. The platforms and apps were then ranked by this value, with rank 1 indicating the highest availability.
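As a minimal sketch of this ranking step (the function name and data layout are ours), the calculation can be reproduced directly from the three counts reported in Tables 1–2:

```python
def availability_rank(entries):
    """Rank sources by captures per day times breadth (rank 1 = highest availability).

    `entries` maps a name to (volume, depth_in_days, breadth), as in Tables 1-2.
    """
    score = {
        name: (volume / depth) * breadth
        for name, (volume, depth, breadth) in entries.items()
    }
    ordered = sorted(score, key=score.get, reverse=True)
    return {name: rank for rank, name in enumerate(ordered, start=1)}

# Example with two rows from Table 2 (Android app bundles in repositories):
ranks = availability_rank({
    "WhatsApp Messenger": (4268, 2585, 10),  # ~1.65 captures/day * 10 repositories
    "Facebook": (4585, 2584, 9),             # ~1.77 captures/day * 9 repositories
})
# ranks == {"WhatsApp Messenger": 1, "Facebook": 2}, matching Table 2
```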
Social media platforms in web archives

As we have analysed elsewhere, social media platforms have been relatively well archived on all of their ‘sides’ (Helmond & van der Vlist, 2019). The five best-archived social media platforms represent an average of 913,440 Mementos, followed by an average of 130,036 for the next 15 platforms (Max = 1,783,855; Min = 3,007; Median = 166,412). As these results suggest, there are many opportunities for historical platform studies about different ‘sides’ and user groups, albeit at different levels of granularity, depending on source availability. In particular, developer and business materials have been well archived and enable researchers to write histories beyond the ‘front-end’ interface for end-users. They may look at platforms’ influential roles as development platforms, advertising platforms, content creation platforms, media publishers, and platform companies (Helmond & van der Vlist, 2019; Helmond, Nieborg, & van der Vlist, 2019; Nieborg & Helmond, 2019). These materials also enable researchers to examine how the technological architectures and economic business models of platforms evolve side by side. In short, platform histories would benefit from considering more than just their end-users and contents, and from including their multiple user groups to examine how they coevolved with respect to other ‘sides’.

App details in web archives

Contrary to most popular social media platforms, apps have been less well archived in general, at least when we look at the preservation of their app store details pages in web archives (Table 1). For Android apps, Facebook Messenger is the best-archived app by far, leaving all other apps behind. In fact, other apps have hardly been archived at all. While the four best-archived top Android apps – Facebook Messenger, Instagram, Facebook, and WhatsApp Messenger – represent an average of 27,681 Mementos each, the next six top apps have an average of just 98.6 Mementos (Max = 85,222; Min = 24; Median = 240). For top iOS apps, Facebook Messenger accounts for nearly 99,581 Mementos while the next nine top apps have an average of just 177.4 Mementos (Max = 99,581; Min = 0; Median = 85). In particular, the pages of non-Western apps have been poorly archived, in line with a previously identified imbalance of source availability in archived websites between the United States and other countries (Thelwall & Vaughan, 2004).

The archived app materials enable researchers to examine the evolution of individual apps, or of app collections and genres. In a previous project, we examined the emergence of secure or encrypted messaging and chat apps on Android and used their descriptions to determine how those apps offered new and different ways of ‘doing privacy’ (for example the emergence of new encryption protocols, and tradeoffs between security, privacy, and usability). Tracking app descriptions over time thus enabled us to understand how apps or app developers responded to Edward Snowden’s surveillance revelations in June 2013, when digital surveillance became a ‘matter of concern’ on the web and in the mobile ecosystem (Dieter et al., 2019; van der Vlist, 2017). App details pages enable app historians to tell stories about an app’s rhetorical positioning (for example using taglines, descriptions), production (for example using developer names, app versions, changelogs), distribution (for example using app collections, relations, pricing models), and reception (for example using app downloads, reviews, ratings).

Table 1 Availability of archived web sources for the top-10 apps across web archives (accumulated): volume of Mementos and depth (days) for the Google Play (Android) details pages.

<table>
<thead>
<tr>
<th>app title</th>
<th>volume</th>
<th>depth (days)</th>
</tr>
</thead>
<tbody>
<tr><td>Facebook</td><td>8,198</td><td>2,637</td></tr>
<tr><td>WhatsApp Messenger</td><td>4,092</td><td>2,600</td></tr>
<tr><td>Facebook Messenger</td><td>85,222</td><td>2,638</td></tr>
<tr><td>WeChat</td><td>442</td><td>2,557</td></tr>
<tr><td>Instagram</td><td>13,215</td><td>2,611</td></tr>
<tr><td>QQ</td><td>38</td><td>2,551</td></tr>
<tr><td>Alipay</td><td>26</td><td>2,188</td></tr>
<tr><td>Taobao</td><td>31</td><td>2,147</td></tr>
<tr><td>WiFi Master Key</td><td>31</td><td>1,890</td></tr>
<tr><td>Baidu</td><td>24</td><td>2,196</td></tr>
</tbody>
</table>

**App bundles in app repositories**

With regard to the preservation of Android app bundles in third-party software repositories, we found more promising results (Table 2). All of the 10 top apps in our set are relatively well archived based on all three criteria. In terms of volume, the four Facebook-owned top apps – WhatsApp Messenger, Facebook, Instagram, Facebook Messenger – have been stored an average of 3,722 times, while the next six – all non-Western – top apps have been stored 297 times on average (Max = 4,585; Min = 166; Median = 469). The oldest versions of the apps in our dataset date back to May 2012. These results suggest that app repositories are promising sources for historical app studies, both to study app bundles themselves and to triangulate app details between app repositories and official app stores. Most importantly, these primary app materials enable researchers to devise historical methods based on ‘static’ app analysis (Dieter et al., 2019). That is, app bundles may be decompiled and analysed as source code to study requested app permissions, embedded code, and external relationships to other infrastructural web services such as advertising and content delivery networks, for example (Gerlitz, Helmond, Nieborg, & van der Vlist, 2019).
Or, researchers may emulate those app bundles to conduct ‘dynamic’ app analysis and study evolving interface design patterns and the network connections that mobile devices establish on behalf of apps.

Table 2 Availability of top-10 Android apps across app repositories (accumulated).

<table>
<thead>
<tr>
<th>app title</th>
<th>volume</th>
<th>depth (days)</th>
<th>breadth</th>
<th>rank</th>
</tr>
</thead>
<tbody>
<tr><td>Facebook</td><td>4,585</td><td>2,584</td><td>9</td><td>2</td></tr>
<tr><td>WhatsApp Messenger</td><td>4,268</td><td>2,585</td><td>10</td><td>1</td></tr>
<tr><td>Facebook Messenger</td><td>2,765</td><td>2,609</td><td>10</td><td>4</td></tr>
<tr><td>WeChat</td><td>315</td><td>2,364</td><td>10</td><td>6</td></tr>
<tr><td>Instagram</td><td>3,271</td><td>2,600</td><td>10</td><td>3</td></tr>
<tr><td>QQ</td><td>229</td><td>2,187</td><td>9</td><td>9</td></tr>
<tr><td>Alipay</td><td>193</td><td>1,362</td><td>8</td><td>7</td></tr>
<tr><td>Taobao</td><td>258</td><td>1,844</td><td>7</td><td>8</td></tr>
<tr><td>WiFi Master Key</td><td>623</td><td>1,401</td><td>8</td><td>5</td></tr>
<tr><td>Baidu</td><td>166</td><td>2,242</td><td>5</td><td>10</td></tr>
</tbody>
</table>

Conclusion: Platform and app historiography

In this chapter, we have demonstrated how researchers may use web archives and app repositories to write histories of new digital objects such as platforms and apps, despite their archiving challenges. We have reflected on the materiality of platforms and apps as specific types of digital objects and have outlined a method to make an inventory of their archived materials. Existing archived sources offer many opportunities for historical platform and app studies, and it is our hope that their affordances for research are further explored. Our exploratory study of source availability for the most popular social media platforms and mobile apps provides important insights into the current state of platform and app archiving, which should be of interest to researchers and historians of web platforms and mobile apps. Furthermore, our assessment of source availability provides relevant starting points and example case studies for different types of platform and app history and may guide future historians in the process of corpus building.

Our exploratory study should also be of interest to web and app archiving practitioners. In particular, our source availability assessment method and the preliminary results of our exploratory study may guide or inspire a reconsideration of archiving efforts going forward. Current web archiving strategies or protocols may not capture all of the relevant materials, as in the case of app store details pages, which are located deep within app stores. We particularly recommend a more comprehensive archiving strategy that captures the multiple ‘sides’ of popular social media platforms and the app details pages of popular app stores beyond the top apps. Although we only looked at a small selection of top platforms and apps, we already observed large discrepancies in source availability between both types of digital objects, which inevitably determines and limits the future histories that may be written about and with those objects. Our selection of popular apps is expected to be far better archived than the millions of apps in the ‘long tail’ of app stores.
We should note, however, that even with a hundred or fewer Mementos it is, of course, possible to write the histories of platforms and apps. Depending on the historical project, differences in source availability may have implications with regard to volume (for example limiting the afforded level of granularity or resolution), depth (for example constraining the historical period), and breadth of availability (for example limiting the possibilities of triangulation or source verification). Existing services and utilities such as Memento and MemGator offer the opportunity to move beyond the Internet Archive as the primary, or even the only, source of web history. They also enable researchers to triangulate and verify sources and thereby address common issues of archive incompleteness and software insecurity (including corrupt app files).

The ephemerality of digital platforms and mobile apps may be understood as the result of a continuous stream of incremental software updates that overwrite the material presence of a platform or app each time. We may conceive of this process of overwriting as a challenge of material erasure, or as a ‘native’ mode of software history-writing. That is, even though these ephemeral digital objects change continuously, web archives and software repositories fortunately capture many of those changes, thereby arresting the ongoing material transformation of platforms and apps at certain time intervals (for example with hourly, daily, or monthly captures or ‘snapshots’). Consequently, we argue that the biographies of platforms and apps are co-written by these digital objects themselves and by web archives, and in the case of apps, also by software repositories. We can employ their different types of primary and contextual sources to ‘reconstruct’ these processes of overwriting at different levels of granularity – from minute, incremental changes to the longer-term evolution of a platform or app. We can use web archives and repositories to reconstruct what was written on top of other writing, and narrate the drama of changes, updates, and versions.

**Funding**

This work is part of the research programme Innovational Research Incentives Scheme Veni with project number 275-45-009, which is (partly) financed by the Dutch Research Council (NWO); and the German Research Foundation (DFG) under Grant DFG-SFB-1187.

**Notes**

2. Over the past decade, app store URLs changed only once or twice: Google Play (since 2012) was formerly called Android Market (2008–2012), and the domain changed from android.com/market to play.google.com/store; Apple’s App Store (since 2008) was formerly called App Store (iTunes Preview) (2012–2019) and, before that, Web Apps for iPhone (2008–2012), and its domains changed from apple.com/webapps to itunes.apple.com to apps.apple.com. For our exploratory study, we focused only on the current URLs at the time of writing.
4. We included the following app repositories: AndroidAPKsBox.com, AndroidAPKsFree.com, AndroidDrawer, APKMirror, APKMonk, APKPure, APKPure.ai, APKPure.co, Aptoide, and Uptodown.
8. All data were collected between May and June 2019.
{"Source-Url": "https://pure.uva.nl/ws/files/50281810/PREPRINT_2021_Helmond_vanderVlist_PlatformAnd_AppHistories.pdf", "len_cl100k_base": 6628, "olmocr-version": "0.1.53", "pdf-total-pages": 18, "total-fallback-pages": 0, "total-input-tokens": 39655, "total-output-tokens": 10529, "length": "2e12", "weborganizer": {"__label__adult": 0.0009632110595703124, "__label__art_design": 0.0016679763793945312, "__label__crime_law": 0.0007390975952148438, "__label__education_jobs": 0.015777587890625, "__label__entertainment": 0.0008788108825683594, "__label__fashion_beauty": 0.00034809112548828125, "__label__finance_business": 0.002681732177734375, "__label__food_dining": 0.0006365776062011719, "__label__games": 0.0018339157104492188, "__label__hardware": 0.0027294158935546875, "__label__health": 0.0007033348083496094, "__label__history": 0.0118255615234375, "__label__home_hobbies": 0.00026297569274902344, "__label__industrial": 0.0003261566162109375, "__label__literature": 0.00803375244140625, "__label__politics": 0.0007491111755371094, "__label__religion": 0.0006742477416992188, "__label__science_tech": 0.1851806640625, "__label__social_life": 0.0007672309875488281, "__label__software": 0.29248046875, "__label__software_dev": 0.468994140625, "__label__sports_fitness": 0.0002899169921875, "__label__transportation": 0.0008287429809570312, "__label__travel": 0.0006165504455566406}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 38793, 0.0572]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 38793, 0.30637]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 38793, 0.86019]], "google_gemma-3-12b-it_contains_pii": [[0, 1368, false], [1368, 2888, null], [2888, 5379, null], [5379, 7860, null], [7860, 10238, null], [10238, 12744, null], [12744, 15123, null], [15123, 17578, null], [17578, 20006, null], [20006, 22267, null], [22267, 23869, null], [23869, 25759, null], [25759, 28374, null], [28374, 30604, null], [30604, 32299, null], [32299, 34918, null], [34918, 37391, null], [37391, 38793, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1368, true], [1368, 2888, null], [2888, 5379, null], [5379, 7860, null], [7860, 10238, null], [10238, 12744, null], [12744, 15123, null], [15123, 17578, null], [17578, 20006, null], [20006, 22267, null], [22267, 23869, null], [23869, 25759, null], [25759, 28374, null], [28374, 30604, null], [30604, 32299, null], [32299, 34918, null], [34918, 37391, null], [37391, 38793, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 38793, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 38793, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 38793, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 38793, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 38793, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 38793, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 38793, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 38793, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 38793, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 38793, null]], "pdf_page_numbers": [[0, 1368, 1], [1368, 2888, 2], [2888, 5379, 3], [5379, 7860, 4], 
[7860, 10238, 5], [10238, 12744, 6], [12744, 15123, 7], [15123, 17578, 8], [17578, 20006, 9], [20006, 22267, 10], [22267, 23869, 11], [23869, 25759, 12], [25759, 28374, 13], [28374, 30604, 14], [30604, 32299, 15], [32299, 34918, 16], [34918, 37391, 17], [37391, 38793, 18]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 38793, 0.15723]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
c6b22e42171998d4b23400725e7341a515a8f983
A LOW-COST MULTICOMPUTER FOR SOLVING THE RCPSP

Grzegorz Pawiński¹, Krzysztof Sapiecha¹
¹Department of Computer Science, Kielce University of Technology

KEY WORDS: RCPSP, multicomputer, distributed processing model

ABSTRACT:

In the paper it is shown that the time necessary to solve the NP-hard Resource-Constrained Project Scheduling Problem (RCPSP) can be considerably reduced using a low-cost multicomputer. We consider an extension of the problem where resources are only partially available and a deadline is given, but the cost of the project should be minimized. In such a case, finding an acceptable solution (optimal or even semi-optimal) is computationally very hard. To reduce this complexity, a distributed processing model of a metaheuristic algorithm, previously adapted by us for working with human resources and the CCPM method, was developed. Then, a new implementation of the model on a low-cost multicomputer built from PCs connected through a local network was designed and compared with a regular implementation of the model on a cluster. Furthermore, to examine communication costs, an implementation of the model on a single multi-core PC was tested, too. The comparative studies proved that the implementation is as efficient as on the more expensive cluster. Moreover, it has balanced load and scales well.

1. INTRODUCTION

Resource allocation, called the Resource-Constrained Project Scheduling Problem (RCPSP), attempts to schedule project tasks efficiently using limited renewable resources, minimising the maximal completion time of all activities [3-5]. A single project consists of \( m \) tasks which are precedence-related by finish-start relationships with zero time lags. The relationship means that all predecessors have to be finished before a task can be started. To be processed, each task requires a human resource (HR). The resources are limited to one unit and therefore have to perform different tasks sequentially. RCPSP is an NP-hard problem. In most cases, branch-and-bound is the only exact method which allows the generation of optimal solutions for scheduling rather small projects (usually containing fewer than 60 tasks and not highly constrained) within acceptable computational effort [1, 5]. Results of the Hartmann and Kolisch [8] investigation showed that the best performing heuristics were the GA of Hartmann [7] and the SA procedure of Bouleimen and Lecocq [2]. Their latest research revealed that the forward-backward improvement technique applied to X-pass methods, metaheuristics or other approaches produces good results, and that the most popular metaheuristics were GAs and TS methods.

In our previous works, cost-efficient project management based on a critical chain (CCPM) was investigated. The CCPM is one of the newest scheduling techniques [19]. It was used to solve a variant of the RCPSP. The goal of the management was to allocate resources in order to minimise the project total cost and complete the project in a given time. A sequential metaheuristic from Deniziak [6] was adapted to take into account specific features of human resources participating in a project schedule. The research showed high efficiency of this adaptation for resource allocation [12]. An extension of the problem, where HRs are only partially available since they may be involved in many projects, was also investigated [14]. The research proved that the adaptation is efficient, but the minimization was still time consuming and would require acceleration to cope with bigger real-life problems.
Our latest research showed that the algorithm has an inherent parallelism. Hence, a distributed processing model for solving the extension of the RCPSP was developed and tested on regular PCs [13]. It gave a scheduling time up to 10 times smaller than sequential processing. Therefore, in this research we present a new implementation of the model, on a low-cost multicomputer built from PCs connected through a local network. Furthermore, we compare it with a regular implementation of the model on a cluster and show that it may be just as efficient, while not as expensive, which might otherwise limit its practical value. The next section of the paper contains a brief overview of related work. Motivation for the research is given in section 3. An implementation of the distributed processing model for the algorithm is presented in section 4. Evaluation of the implementation in both distributed and parallel environments is given in section 5. The paper ends with conclusions.

2. RELATED WORK

Researchers studied the problem and suggested their own solutions, which can be divided into exact procedures and heuristics. Branch and bound methods are an example of the exact procedures (see e.g. [3], [4]). In [11] another method, a tree search algorithm, was presented. It is based on a new mathematical formulation that uses lower bounds and dominance criteria. An in-depth study of the performance of the latest RCPSP heuristics can be found in [10]. Heuristics described by the authors include X-pass methods, also known as priority rule-based heuristics, and classical metaheuristics, such as Genetic Algorithms (GAs), Tabu search (TS), Simulated annealing (SA), and Ant Colony Optimisation (ACO). Non-standard metaheuristics and other methods were presented as well. The former consist of local search and population-based approaches, which have been proposed to solve the RCPSP. The authors investigated a heuristic which applies forward-backward and backward-forward improvement passes. For a detailed description of the heuristic schedule generation schemes, priority rules, and representations refer to [8].

The effectiveness of scheduling methods can be further improved using parallel processing. Some implementations of parallel TS [15-17] and SA [18] algorithms for different combinatorial problems have already been proposed. The most common one is based on dividing (partitioning) the problem such that several partitions could be run in parallel and then merged. Parallelism in GAs can be achieved at the level of single individuals, the fitness functions or independent runs [21, 22]. All of the parallel approaches fall into three categories: the first uses a global model, the second uses a coarse-grained (island) model and the third uses a fine-grained (grid, cellular) model [20]. In the global model, a master process manages the whole population by assigning subsets of individuals to slave processes. In the island model, a population is divided into sub-populations that are evolved separately. During evolution, some individuals are exchanged periodically between them. In the grid model, a population is represented as a network of interconnected individuals where only neighbors may interact. It was observed that parallel GAs (PGAs) usually provide better efficiency than sequential ones [20]. The same parallel approaches can be applied for ACO. In [23] five strategies of parallel processing are described, which are mainly based on the well-known master/slave approach [24].
3. MOTIVATION

The sequential algorithms are time consuming, which considerably limits their usefulness. Speeding up the calculations would be desirable for project managers, because it may allow managing complex projects in acceptable time. Parallel models offer the advantage of reducing the execution time and give an opportunity to solve new problems which have been unreachable in the case of sequential models. The most popular parallel strategies are based on the master/slave approach [24] with centralized management of distributing tasks and gathering results. The master can efficiently coordinate the system, avoiding potential conflicts before they take place, and react on failures of the slaves. However, global gathering and re-broadcasting of large configurations can be time-consuming. Costs of synchronization between slaves have to be considered, also. Some slaves may have to wait for completing other tasks, which is necessary to retain data integrity. Moreover, the master is the weakest point of the system. The system will slow down if the master cannot handle incoming requests. If the master crashes, the whole system will also crash. Another problem is load imbalance caused by the unpredictable processing time of each slave. Summarizing, the gain coming from parallelization of the algorithm may be significantly reduced.

From our research it also follows that parallel processing could efficiently reduce the amount of time consumed by the metaheuristic algorithm [13]. Usually, such reduction requires the use of a cluster and hence is expensive, which may limit its popularity. The key idea to overcome this inconvenience is to make use of the multi-core architecture of low-cost PCs, instead of the cluster. Such a multicomputer is cheap, easily assembled and might be very useful for practical reasons. However, it should be proven that the implementation is as efficient as on the cluster, and that it has balanced load and scales well.

4. OPTIMIZATION ALGORITHM

The metaheuristic algorithm starts with the initial point and searches for the cheapest solution satisfying given time constraints. The initial schedule is generated by greedy procedures that try to find a resource for each task based on the smallest increase of the project duration or the project total cost. It is a suboptimal solution which the algorithm tries to enhance. In each pass of the iterative process, the current project schedule is modified in order to get closer to the optimum. In the first (add) stage, a new HR which is not in the schedule is attached to it. Tasks of HRs which have already been engaged in the schedule are moved to the new HR, but only when a positive gain is achieved. Afterwards, if there are HRs without allocated tasks, they are removed from the schedule. The best schedule goes to the next stage and the procedure is repeated until no more free HRs are available. In the second (rem) stage, all tasks allocated to the HR are moved onto other HRs still remaining in the schedule, but only when a positive gain is achieved. Then again, HRs without allocated tasks are removed from the schedule. Finally, the best project schedule coming from all stages is chosen. The iterative process is repeated for every resource from the resource library until no improvement can be found. At the very end, project tasks may be shifted right to the latest feasible position into their forward free slack by means of an As Late As Possible (ALAP) schedule.
4.1. Distributed processing model

The distributed processing model is shown in Figure 1.

Figure 1 Distributed processing model

In general, there are \( R \cdot (1 + R_r) \) schedule modifications that have to be calculated, where \( R \) is the number of HRs and \( R_r \) is the number of HRs left after a particular add stage. However, not all of them can be performed at the same time. At the beginning, only \( R \) attempts to add a new HR to the schedule may be calculated. Each of the add stages could be performed simultaneously. Afterwards, if any of them is finished, \( R_r \) attempts in the rem stage may be started. The attempts to move all tasks from each of the HRs may also be calculated separately. Thus, the maximal number of simultaneous modifications is \( R \cdot R_r \), when all the add stages finish at the same time. The process iteration ends after finishing all of the second stages.

4.2. Implementation of the model

The distributed processing model (Figure 2) was implemented in Java. One application, which is a tasks dispatcher (D), manages a pool of threads responsible for communication with other worker applications located on remote computers. At the beginning, workers notify the dispatcher about their readiness to execute tasks. The tasks dispatcher creates a new thread for each worker and joins it to the pool. The pool contains as many threads as needed, but will reuse previously constructed threads when they are available. On the remote computers, workers run as independent processes, which makes them available for direct communication. Therefore, the tasks dispatcher may uniformly split the computational tasks, so that the workload can easily be balanced. Each remote computer runs as many processes as the number of processor cores, in order to use the whole computing power of multi-core machines. While executing an iteration of the algorithm, the tasks dispatcher sends schedule modification requests to the first free worker. To this end, it uses Remote Method Invocation (RMI) for communication. If a worker is not responding, it will be removed from the pool and the request will be sent to another free worker. Workers receive the project data and the search parameters and invoke a method in order to perform the add or the rem stage. Afterwards, the results of the modifications are sent back to the dispatcher and then the thread can be reused. Synchronization occurs at the end of each iteration, because all the rem stages have to be finished in order to choose the best schedule.
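The same dispatch pattern can be illustrated schematically. The paper's implementation uses Java RMI with worker processes on remote machines; the Python sketch below substitutes a local process pool, and the request tuples and the `evaluate_modification` stub are our own placeholders rather than the authors' code.

```python
# Schematic sketch of the dispatcher/worker pattern described above, using a
# local process pool instead of RMI workers on remote computers.
from concurrent.futures import ProcessPoolExecutor, as_completed

def evaluate_modification(request):
    """Stand-in for one add/rem stage schedule modification evaluated by a worker."""
    stage, hr = request
    # ...apply the modification to a copy of the schedule and compute its cost...
    return {"stage": stage, "hr": hr, "cost": 1000 - 10 * hr - (5 if stage == "rem" else 0)}

def run_iteration(free_hrs, engaged_hrs, n_workers=4):
    """One iteration: dispatch all add attempts, start the corresponding rem
    attempts as individual add attempts complete, and synchronise at the end."""
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        add_futures = [pool.submit(evaluate_modification, ("add", hr)) for hr in free_hrs]
        rem_futures = []
        for finished_add in as_completed(add_futures):
            finished_add.result()  # an add attempt finished; its rem attempts may start
            rem_futures += [pool.submit(evaluate_modification, ("rem", hr)) for hr in engaged_hrs]
        # synchronisation point: all rem stages must finish before choosing the best schedule
        results = [f.result() for f in add_futures + rem_futures]
    return min(results, key=lambda r: r["cost"])

if __name__ == "__main__":
    print(run_iteration(free_hrs=range(4), engaged_hrs=range(4, 8)))
```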
5. COMPARATIVE STUDIES

The efficiency of the algorithm described in the paper was estimated on 100 randomly generated project plans containing from 30 to 60 tasks, and from 8 to 16 HRs with random data. Each project plan was scheduled several times and the results were averaged. Tasks in the project plan may have at most 4 precedence relationships, with probability 0.35. They can be easily scheduled because they have few predecessors or none. If the probability of inserting the precedence relationships were lower, the project plan would contain mostly unconnected tasks. On the other hand, tasks with two or more predecessors significantly decrease the search space. In each project, resource availability was reduced by allocating 30 tasks from PSPLIB, developed by Kolisch and Sprecher [9]. The set with 30 non-dummy activities is currently the hardest standard set of RCPSP instances for which all optimal solutions are known [4]. However, we considered an extension of the RCPSP where resources have already got their own schedule and the cost of the project, but not the project duration, should be minimized. So even though we take the project instances from PSPLIB, the results cannot be compared. The initial schedule was generated by the two greedy procedures mentioned at the beginning of section 4.

The implementation of the distributed model was run on two distributed systems:
- a multicomputer built from PCs (ClusterPCs) that comprises 10 multi-core computers with an Intel Core i5-760 processor (8M Cache, 2.80 GHz) and 2 GB of RAM, connected via a Gigabit Ethernet TCP/IP local network,
- a regular cluster that comprises 1 head node with an Intel Xeon E5410@2.33GHz and 16 GB of RAM, and 10 processing nodes with an Intel Xeon E5205@1.86GHz and 6 GB of RAM, connected via a Gigabit Ethernet TCP/IP local network.

Furthermore, to examine communication costs, an implementation of the model on a single multi-core PC was tested, too.

5.1. Tests which examine implementation of the model in distributed environments

The algorithm scalability depends on the number of HRs because it is related to the number of schedule modifications. The number of independent requests, and consequently the need for workers, increases along with the increase of the number of HRs. The influence of changing the number of workers on the computation time, with respect to the number of tasks, is shown in Figure 3. In both distributed environments, the computation time significantly falls as the number of workers grows. The decline is particularly visible when only a few workers are used. Eventually, the computation time levels off at its minimum, no matter how many workers are used. In both environments, the increase of the number of tasks also influences the drop of the scheduling time. However, the cluster, despite slower CPUs, copes better with the increase of the number of tasks. In the cluster, the growth of the scheduling time in more complex projects is slower, especially when only a few workers are used. In general, the reduction of the computation time looks similar in both environments. It is worth noting that the computation time was reduced to as little as 6% of the sequential computation time for the project with 60 tasks and 12 HRs (Figure 3b, left column).

Figure 3 Computation times compared with the number of workers for a constant number of HRs (left column – ClusterPCs, right column – the cluster)

The CPU usage in ClusterPCs during scheduling of a project with 35 tasks and 16 HRs was examined (Figure 4). The CPU usage was monitored every 50 ms and the reads were averaged at the end of the calculations. More frequent reads could influence the processor load. The number of HRs was chosen so that enough simultaneous attempts were provided to keep the workers busy. The PCs were running 4 workers each (one worker was assigned to every core). Figure 4 illustrates how the schedule modification requests spread over the available PCs. The CPU usage on PC #1 is almost 100%, but only when 4 workers are used. If the number of workers increases, the load is balanced by the use of the other PCs. The distributed algorithm scales well because the computational tasks may be uniformly split among workers. Summing up the core usage (counted in 100%), it grows from 3.7 cores for 4 workers to 9.48 cores for 36 workers. The total core usage together with the tasks dispatcher was 10.02. Hence, the scheduling time was reduced 10 times by the use of 40 cores on 10 PCs.
5.2. Tests which examine the influence of the communication cost on algorithm performance

Distributed tests were executed in order to examine how the network latency influences the algorithm performance. To that end, 4 workers were run on the ClusterPCs comprising 2 multi-core PCs and compared with 4 workers on 2 processing nodes in the cluster and 4 workers on a single PC (so-called LocalPC). All workers were using RMI for communication. At first, the number of modification requests was counted with respect to the number of resources and the number of tasks (Table 1).

Table 1 The number of modification requests

<table>
<thead>
<tr>
<th>No. resources</th>
<th>30 tasks</th>
</tr>
</thead>
<tbody>
<tr><td>10</td><td>634</td></tr>
<tr><td>12</td><td>765</td></tr>
<tr><td>14</td><td>1009</td></tr>
<tr><td>16</td><td>1412</td></tr>
</tbody>
</table>

The number of requests increases as the number of resources increases and varies along with the increase of the number of tasks. However, the more requests are sent, the greater the impact of the communication cost on the performance. The average scheduling time for a project with 30 tasks is shown in Table 2.

Table 2 Average scheduling time for a project with 30 tasks [ms] (Resnum – No. resources)

<table>
<thead>
<tr>
<th>Resnum</th>
<th>cluster</th>
<th>ClusterPCs</th>
<th>LocalPC</th>
<th>Threads</th>
</tr>
</thead>
<tbody>
<tr><td>2</td><td>3</td><td>4</td><td>2</td><td>3</td></tr>
<tr><td>10</td><td>5587</td><td>3922</td><td>2949</td><td>3355</td></tr>
<tr><td>12</td><td>7242</td><td>4825</td><td>3827</td><td>4016</td></tr>
<tr><td>14</td><td>9677</td><td>6427</td><td>5137</td><td>5190</td></tr>
<tr><td>16</td><td>12911</td><td>8548</td><td>6555</td><td>7745</td></tr>
</tbody>
</table>

It is clear that the scheduling time decreases when the number of workers grows. Yet, the decline is very low between 3 and 4 workers in the LocalPC, because the computer resources start to be overloaded when 4 workers and the tasks dispatcher run on the same machine. On average, the LocalPC is about 13% faster than the corresponding ClusterPCs (for fewer than 4 workers), due to low communication costs. On the other hand, the ClusterPCs is better when the number of workers exceeds the number of processor cores. It is also not limited in the number of workers. But even the usage of 4 workers reduced the scheduling time by 54% in the ClusterPCs and by 48% in the cluster, in the project with 30 tasks and 10 HRs. However, the reduction ratio in the former decreases along with the increasing number of resources and does not change in the latter. It means that the cluster also copes better than the PCs with the increase of the number of resources. The average time of transferring data between the tasks dispatcher and workers is shown in Table 3.

Table 3 Average time of transferring data between the tasks dispatcher and workers [ms] (Remote – workers located on 2 remote computers, Local – workers located on the same machine, Resnum – No. resources).
<table> <thead> <tr> <th>Resnum</th> <th>cluster</th> <th>ClusterPCs</th> <th>LocalPC</th> <th>Threads</th> </tr> </thead> <tbody> <tr> <td>30</td> <td>35</td> <td>40</td> <td>30</td> <td>35</td> </tr> <tr> <td>10</td> <td>5,62</td> <td>6,41</td> <td>6,22</td> <td>5,84</td> </tr> <tr> <td>12</td> <td>5,62</td> <td>6,41</td> <td>6,22</td> <td>5,96</td> </tr> <tr> <td>14</td> <td>5,66</td> <td>5,66</td> <td>6,29</td> <td>6,06</td> </tr> <tr> <td>16</td> <td>5,77</td> <td>5,72</td> <td>6,31</td> <td>6,49</td> </tr> </tbody> </table> Yet, the increase of the time is much faster in the ClusterPCs, than in the cluster. Consequently, the data transfer in the ClusterPCs gets slower in the projects with more than 35 tasks and 10 HRs. On average, the data transfer is about 2,2 times slower in the ClusterPCs than within a single multi-core PC. On a single machine, it may be further reduced to less than 0,5 ms by the use of threads instead of processes in LocalPC (so called Threads). Threads are much lighter than processes and share the process' resources. Thus, even if only one multi-core machine is available, the scheduling time with the use of 4 workers may be reduced by about 47%. The scheduling time on a single machine with the use of 4 threads is relevant to the scheduling in ClusterPCs on 2 multi-core PCs with 4 workers on each. But still, if the need for workers is greater, the ClusterPCs is better. Moreover, running more threads than 5 on a 4-core processor is not so efficient. Comparison results of time needed to transfer data between the tasks dispatcher and 3 workers, averaged from all attempts, are shown on Figure 5. ![Figure 5](image) Figure 5 Comparison results of time needed to transferring data between the tasks dispatcher and 3 workers averaged from all attempts [ms] 6. CONCLUSIONS In the research, a distributed model was used in order to reduce the computation time for a solution of the RCPSP when resources are partially available. An implementation of the model on a multicomputer built from PCs was tested and compared with regular implementation of the model on a cluster. The tasks dispatcher and workers were connected through a local network and were using RMI for communication. The tasks dispatcher was using multithreading for spreading and gathering data while, at the same time, workers were calculating different schedule modifications and sending back the results. The workers were run on remote computers as independent processes and hence did not have to be synchronized. Workers were gathered in a pool managed by the tasks dispatcher and were available for a direct use. The best efficiency was obtained when there were as many processes running as the number of computer cores. Hence, the more cores inside the computer, the more workers can run on it and fewer PCs are needed. Consequently, the more workers the shorter the computation time, but only when there is enough work to do for the workers. Too few workers cannot handle rapidly growing calculation requests after the first stage of the algorithm. The maximum number of workers depends on the number of HRs because it is related to the number of schedule modifications. Thus, the project scheduling cannot be speed up if there is a lot of resources and not enough workers and vice versa. The research showed that the multicomputer built from multi-core PCs may be successfully used for reduction of the scheduling time. Obtained results are comparable with the cluster. In both environments the reduction of time looks similar. 
6. CONCLUSIONS

In this research, a distributed model was used to reduce the computation time of solving the RCPSP when resources are only partially available. An implementation of the model on a multicomputer built from PCs was tested and compared with a regular implementation of the model on a cluster. The tasks dispatcher and the workers were connected through a local network and communicated via RMI. The tasks dispatcher used multithreading for spreading and gathering data while, at the same time, the workers calculated different schedule modifications and sent back the results. The workers ran on remote computers as independent processes and hence did not have to be synchronized. Workers were gathered in a pool managed by the tasks dispatcher and were available for direct use.

The best efficiency was obtained when the number of running processes equalled the number of processor cores. Hence, the more cores a computer has, the more workers can run on it and the fewer PCs are needed. Consequently, the more workers, the shorter the computation time, but only when there is enough work for the workers to do; too few workers cannot handle the rapidly growing number of calculation requests after the first stage of the algorithm. The maximum useful number of workers depends on the number of HRs, because it is related to the number of schedule modifications. Thus, project scheduling cannot be sped up if there are many resources but not enough workers, and vice versa.

The research showed that a multicomputer built from multi-core PCs may be successfully used to reduce the scheduling time. The obtained results are comparable with the cluster, and in both environments the reduction of time looks similar. However, the cluster copes better with an increasing number of tasks and resources: for projects with more than 35 tasks and 10 HRs, the communication cost in the cluster is lower than in the ClusterPCs. On a single machine, the scheduling time is about 13% shorter than over a local network (for fewer than 4 workers) due to the absence of network latency, and it can be reduced further, by about 47%, by using threads instead of processes. However, the computer's resources start to be overloaded when the tasks dispatcher and more than 3 processes, or more than 5 threads, run on the same 4-core processor. Therefore, the ClusterPCs outperforms the LocalPC when more than 3 workers are used, and it outperforms the use of threads when more than 7 workers are used.

The experimental results showed that the distributed model is well balanced: the computational tasks are split uniformly among the workers, and if the number of workers increases, the load spreads over the available PCs. The distributed algorithm scales well, adjusting to the number of workers. Moreover, if any of the workers crashes, its task is taken over by another worker and processing continues. Projects of various complexity were tested, and in each case the scheduling time was significantly reduced by the distributed calculations, in the best case down to about 6% of the sequential time. Compared with sequential computing, the number of cores used (counted in units of 100%) was 10 times higher during the scheduling of a project with 30 tasks and 16 HRs with 36 workers.
{"Source-Url": "http://journals.umcs.pl/ai/article/download/3501/pdf", "len_cl100k_base": 5676, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 37583, "total-output-tokens": 7807, "length": "2e12", "weborganizer": {"__label__adult": 0.00027680397033691406, "__label__art_design": 0.0004146099090576172, "__label__crime_law": 0.0004630088806152344, "__label__education_jobs": 0.0028896331787109375, "__label__entertainment": 9.846687316894533e-05, "__label__fashion_beauty": 0.00018334388732910156, "__label__finance_business": 0.0009365081787109376, "__label__food_dining": 0.00034737586975097656, "__label__games": 0.0008044242858886719, "__label__hardware": 0.0029315948486328125, "__label__health": 0.0006604194641113281, "__label__history": 0.000431060791015625, "__label__home_hobbies": 0.0001962184906005859, "__label__industrial": 0.0013294219970703125, "__label__literature": 0.00022411346435546875, "__label__politics": 0.00034427642822265625, "__label__religion": 0.0004572868347167969, "__label__science_tech": 0.42626953125, "__label__social_life": 0.00014007091522216797, "__label__software": 0.0244293212890625, "__label__software_dev": 0.53466796875, "__label__sports_fitness": 0.0002932548522949219, "__label__transportation": 0.0008044242858886719, "__label__travel": 0.0002486705780029297}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30841, 0.06805]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30841, 0.4499]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30841, 0.92898]], "google_gemma-3-12b-it_contains_pii": [[0, 2644, false], [2644, 5983, null], [5983, 9471, null], [9471, 11444, null], [11444, 13082, null], [13082, 16339, null], [16339, 16919, null], [16919, 18793, null], [18793, 22438, null], [22438, 25128, null], [25128, 28343, null], [28343, 30841, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2644, true], [2644, 5983, null], [5983, 9471, null], [9471, 11444, null], [11444, 13082, null], [13082, 16339, null], [16339, 16919, null], [16919, 18793, null], [18793, 22438, null], [22438, 25128, null], [25128, 28343, null], [28343, 30841, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30841, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30841, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30841, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30841, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30841, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30841, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30841, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30841, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30841, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30841, null]], "pdf_page_numbers": [[0, 2644, 1], [2644, 5983, 2], [5983, 9471, 3], [9471, 11444, 4], [11444, 13082, 5], [13082, 16339, 6], [16339, 16919, 7], [16919, 18793, 8], [18793, 22438, 9], [22438, 25128, 10], [25128, 28343, 11], [28343, 30841, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30841, 0.20792]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
b436749a3fd1ff8dbbadb4117c5bb1b13e51ddfe
Algorithm Design - analysis of algorithms - greed - reduction - dynamic programming - divide-and-conquer - randomization
https://algs4.cs.princeton.edu

**Algorithm design patterns.**
- Analysis of algorithms.
- Greed.
- Reduction.
- Dynamic programming.
- Divide-and-conquer.
- Randomization.

Want more? See COS 240, COS 343, COS 423, COS 445, COS 451, MAT 375, ...

**Egg drop.**

**Goal.** Find $T$ using fewest drops.

**Rules.**
- An egg that breaks cannot be reused.
- An egg that survives a fall can be reused.
- The effect of a drop is the same for all eggs.
- An egg can break on floor 1 or survive on floor $n$.

![Diagram of egg drop problem]

**Variant 0.** 1 egg.
**Solution.** Use sequential search: drop on floors 1, 2, 3, … until the egg breaks.
**Analysis.** 1 egg and $\leq n$ drops; in terms of the threshold (a parameter you don't know a priori), 1 egg and $T$ drops.

**Variant 1.** $\infty$ eggs.
**Solution.** Binary search for $T$.
- Initialize $[lo, hi] = [0, n+1]$.
- Maintain the invariant: the egg breaks on floor $hi$ but not on floor $lo$.
- Repeat until the length of the interval is 1:
  - drop on floor $mid = \lfloor (lo + hi) / 2 \rfloor$;
  - if it breaks, update $hi = mid$;
  - otherwise, update $lo = mid$.
**Analysis.** $\sim \log_2 n$ eggs and $\sim \log_2 n$ drops.
*Suppose $T$ is much smaller than $n$. Can you guarantee $\Theta(\log T)$ drops?*

**Variant 1'.** $\infty$ eggs and $\Theta(\log T)$ drops.
**Solution.** Use repeated doubling; then binary search.
- Drop on floors 1, 2, 4, 8, 16, …, $x$ to find a floor $x$ such that the egg breaks on floor $x$ but not on $\tfrac{1}{2}x$.
- Binary search in the interval $[\tfrac{1}{2}x, x]$.
**Analysis.** $\sim \log_2 T$ eggs and $\sim 2 \log_2 T$ drops.
- Repeated doubling: 1 egg and $1 + \log_2 x$ drops.
- Binary search: $\sim \log_2 x$ eggs and $\sim \log_2 x$ drops.
- Observe that $T \leq x < 2T$.

**Variant 2.** 2 eggs. As a function of $n$, what is the fewest drops that an algorithm can guarantee?
A. $\Theta(1)$
B. $\Theta(\log n)$
C. $\Theta(\sqrt{n})$
D. $\Theta(n)$

**Egg Drop (Asymmetric Search)**

**Variant 2.** 2 eggs.
**Solution.** Use gridding; then sequential search.
- Drop at floors $\sqrt{n}$, $2\sqrt{n}$, $3\sqrt{n}$, … until the first egg breaks, say at floor $c\sqrt{n}$.
- Sequential search in the interval $[(c-1)\sqrt{n}, c\sqrt{n}]$.
**Analysis.** At most $2\sqrt{n}$ drops.
- First egg: $\leq \sqrt{n}$ drops.
- Second egg: $\leq \sqrt{n}$ drops.

**Signing bonus 1.** Use 2 eggs and at most $\sqrt{2n}$ drops.
**Signing bonus 2.** Use 2 eggs and $O(\sqrt{T})$ drops.
**Signing bonus 3.** Use 3 eggs and $O(n^{1/3})$ drops.
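As a concrete illustration of the two-egg strategy above, here is a small Java sketch that finds the threshold $T$ with gridding followed by sequential search; the `breaks` oracle, the class name and the driver are assumptions added for this example, not course code.

```java
public class TwoEggDrop {
    private final int threshold;   // T: lowest floor on which an egg breaks
    private int drops = 0;         // number of drops performed

    public TwoEggDrop(int threshold) { this.threshold = threshold; }

    // Oracle: does an egg dropped from this floor break?
    private boolean breaks(int floor) {
        drops++;
        return floor >= threshold;
    }

    // Gridding with the first egg, then sequential search with the second egg.
    // Uses at most about 2*sqrt(n) drops.
    public int find(int n) {
        int step = (int) Math.ceil(Math.sqrt(n));
        int floor = step;
        while (floor <= n && !breaks(floor)) {        // first egg: floors step, 2*step, ...
            floor += step;
        }
        int hi = Math.min(floor, n);                  // first egg broke here (or never broke)
        for (int f = hi - step + 1; f <= hi; f++) {   // second egg: sequential search
            if (breaks(f)) return f;
        }
        return hi + 1;                                // egg survives every floor up to n
    }

    public static void main(String[] args) {
        TwoEggDrop egg = new TwoEggDrop(42);
        System.out.println("T = " + egg.find(100) + " found with " + egg.drops + " drops");
    }
}
```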
Algorithm Design - analysis of algorithms - greed - reduction - dynamic programming - divide-and-conquer - randomization

**Greedy algorithms.** Make locally optimal, irrevocable choices at each step.

Familiar examples.
- Prim's algorithm. [for MST]
- Kruskal's algorithm. [for MST]
- Dijkstra's algorithm. [for shortest paths]

More classic examples.
- A* search algorithm.
- Huffman's algorithm for data compression.
- Gale-Shapley algorithm for stable marriage.
- Greedy algorithm for matroids.
- ...

Caveat. Greedy algorithms rarely lead to provably optimal solutions [ but are often used anyway in practice, especially for intractable problems ].

**Coin changing**

**Goal.** Given U.S. coin denominations { 1, 5, 10, 25, 100 }, devise a method to pay an amount to a customer using the fewest coins.
**Ex.** 34¢: 6 coins.

**Cashier's (greedy) algorithm.** Repeatedly add the coin of the largest value that does not exceed the remaining amount to be paid.
**Ex.** $2.89: 10 coins.

Is the cashier's algorithm optimal for U.S. coin denominations { 1, 5, 10, 25, 100 }?
A. Yes, greedy algorithms are always optimal.
B. Yes, for any set of coin denominations $d_1 < d_2 < \ldots < d_n$ provided $d_1 = 1$.
C. Yes, because of special properties of U.S. coin denominations.
D. No.

**Properties of any optimal solution (for U.S. coin denominations)**

Property 1. Number of pennies $P \leq 4$. Pf. Replace 5 pennies with 1 nickel.
Property 2. Number of nickels $N \leq 1$.
Property 3. Number of dimes $D \leq 2$.
Property 4. Number of quarters $Q \leq 3$.
Property 5. $N + D \leq 2$. Pf. Properties 2 and 3 give $N \leq 1$ and $D \leq 2$; if $N = 1$ and $D = 2$, replace them with 1 quarter.
Property 6. $P + 5N + 10D + 25Q \leq 99$. [P1 contributes at most 4, P5 at most 20, P4 at most 75.]

**Optimality of cashier's algorithm (for U.S. coin denominations)**

Proposition. The cashier's algorithm yields the unique optimal solution for denominations { 1, 5, 10, 25, 100 }.
Pf. [ for dollar coins ]
- Suppose we are changing amount $x.yz$.
- The cashier's algorithm takes $x$ dollar coins.
- Suppose (for the sake of contradiction) that an optimal solution takes fewer than $x$ dollar coins.
- Then the optimal solution must make change for $\geq 100$¢ using only pennies, nickels, dimes, and quarters, so it satisfies $P + 5N + 10D + 25Q \geq 100$.
- This contradicts Property 6: $P + 5N + 10D + 25Q \leq 99$.
[ similar arguments justify the greedy strategy for quarters, dimes, and nickels ]
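A direct Java rendering of the cashier's algorithm may help; this is a minimal sketch in which the class and method names and the driver values are chosen for the example rather than taken from algs4.

```java
import java.util.Arrays;

public class CashierChange {
    // Repeatedly take the largest denomination that does not exceed the
    // remaining amount; returns the number of coins used.
    public static int change(int cents, int[] denominations) {
        int[] d = denominations.clone();
        Arrays.sort(d);                      // ascending order
        int coins = 0;
        for (int i = d.length - 1; i >= 0; i--) {
            coins += cents / d[i];           // take as many of this coin as fit
            cents %= d[i];
        }
        return coins;
    }

    public static void main(String[] args) {
        int[] us = { 1, 5, 10, 25, 100 };
        System.out.println(change(34, us));   // 25 + 5 + 1 + 1 + 1 + 1 -> 6 coins
        System.out.println(change(289, us));  // $2.89 -> 10 coins
    }
}
```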
Algorithm Design - analysis of algorithms - greed - reduction - dynamic programming - divide-and-conquer - randomization
https://algs4.cs.princeton.edu

**Reductions.** Problem $X$ reduces to problem $Y$ if you can solve $X$ by using an algorithm for $Y$.

**Ex 1.** Finding the median reduces to sorting.
**Ex 2.** Bipartite matching reduces to maxflow.

Many, many problems reduce to:
- Sorting.
- Maxflow.
- Suffix array. [see COS 343]
- Shortest path.
- Minimum spanning tree.
- Linear/semidefinite programming. [see ORF 307 or ORF 363]
- ...

**Note.** Reductions also play a central role in computational complexity (e.g., NP-completeness).

**Shortest path with orange and black edges**

**Goal.** Given a digraph where each edge has a positive weight and is colored orange or black, find the shortest path from $s$ to $t$ that uses at most $k$ orange edges.

- $k = 0$: $s \rightarrow 1 \rightarrow t$ (17)
- $k = 1$: $s \rightarrow 3 \rightarrow t$ (13)
- $k = 2$: $s \rightarrow 2 \rightarrow 3 \rightarrow t$ (11)
- $k = 3$: $s \rightarrow 2 \rightarrow 1 \rightarrow 3 \rightarrow t$ (10)
- $k = 4$: $s \rightarrow 2 \rightarrow 1 \rightarrow 3 \rightarrow t$ (10)

A reduction to shortest paths:
- Create $k+1$ copies of the vertices in digraph $G$, labeled $G_0, G_1, \ldots, G_k$.
- For each black edge $v \rightarrow w$: add an edge from vertex $v$ in copy $G_i$ to vertex $w$ in the same copy $G_i$.
- For each orange edge $v \rightarrow w$: add an edge from vertex $v$ in copy $G_i$ to vertex $w$ in copy $G_{i+1}$.
- Compute the shortest path from $s$ in $G_0$ to any copy of $t$.

![Diagram of the reduction for k = 2](diagram.png)

What is the worst-case running time of the algorithm as a function of $k$, the number of vertices $V$, and the number of edges $E$? Assume $E \geq V$ and $k > 0$.
A. $\Theta(E \log V)$
B. $\Theta(k \, E)$
C. $\Theta(k \, E \log V)$
D. $\Theta(k^2 \, E \log V)$
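To make the reduction concrete, the following sketch builds the layered graph explicitly and runs a plain binary-heap Dijkstra over it; the edge representation and class names are assumptions made for this example, not algs4 types.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.PriorityQueue;

public class OrangeBlackShortestPath {
    // Edge v -> w with a weight and a flag telling whether it is orange.
    record Edge(int v, int w, double weight, boolean orange) {}

    // Shortest s->t distance using at most k orange edges (layered-graph reduction).
    static double shortest(int n, List<Edge> edges, int s, int t, int k) {
        int layers = k + 1;
        int size = n * layers;                       // vertex v in layer i has id i*n + v
        List<List<double[]>> adj = new ArrayList<>();
        for (int i = 0; i < size; i++) adj.add(new ArrayList<>());
        for (Edge e : edges) {
            for (int i = 0; i < layers; i++) {
                if (!e.orange()) {                   // black edge stays in layer i
                    adj.get(i * n + e.v()).add(new double[] { i * n + e.w(), e.weight() });
                } else if (i + 1 < layers) {         // orange edge moves to layer i+1
                    adj.get(i * n + e.v()).add(new double[] { (i + 1) * n + e.w(), e.weight() });
                }
            }
        }
        // Dijkstra from s in layer 0.
        double[] dist = new double[size];
        Arrays.fill(dist, Double.POSITIVE_INFINITY);
        dist[s] = 0.0;
        PriorityQueue<double[]> pq = new PriorityQueue<>((a, b) -> Double.compare(a[1], b[1]));
        pq.add(new double[] { s, 0.0 });
        while (!pq.isEmpty()) {
            double[] cur = pq.poll();
            int v = (int) cur[0];
            if (cur[1] > dist[v]) continue;          // stale priority-queue entry
            for (double[] e : adj.get(v)) {
                int w = (int) e[0];
                if (dist[v] + e[1] < dist[w]) {
                    dist[w] = dist[v] + e[1];
                    pq.add(new double[] { w, dist[w] });
                }
            }
        }
        double best = Double.POSITIVE_INFINITY;      // best distance to any copy of t
        for (int i = 0; i < layers; i++) best = Math.min(best, dist[i * n + t]);
        return best;
    }
}
```

The layered graph has $(k+1)V$ vertices and at most $(k+1)E$ edges, which is what the running-time quiz above is probing.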
Algorithm Design - analysis of algorithms - greed - reduction - dynamic programming - divide-and-conquer - randomization
https://algs4.cs.princeton.edu

**Dynamic programming.**
- Break up a problem into a series of overlapping subproblems.
- Build up solutions to larger and larger subproblems. [ caching solutions to subproblems in a table for later reuse ]

Familiar examples.
- Bellman–Ford.
- Seam carving.
- Shortest paths in DAGs.

More classic examples.
- Unix diff.
- Viterbi algorithm for hidden Markov models.
- CKY algorithm for parsing context-free grammars.
- Needleman–Wunsch/Smith–Waterman for DNA sequence alignment.
- ...

**House coloring problem**

**Goal.** Paint a row of $n$ houses red, green, or blue so that:
- the total cost is minimized, where $\text{cost}(i, \text{color})$ is the cost to paint house $i$ the given color;
- no two adjacent houses have the same color.

<table> <thead> <tr> <th></th> <th>1</th> <th>2</th> <th>3</th> <th>4</th> <th>5</th> <th>6</th> </tr> </thead> <tbody> <tr> <td>$\text{cost}(i, \text{red})$</td> <td>7</td> <td>6</td> <td>7</td> <td>8</td> <td>9</td> <td>20</td> </tr> <tr> <td>$\text{cost}(i, \text{green})$</td> <td>3</td> <td>8</td> <td>9</td> <td>22</td> <td>12</td> <td>8</td> </tr> <tr> <td>$\text{cost}(i, \text{blue})$</td> <td>16</td> <td>10</td> <td>4</td> <td>2</td> <td>5</td> <td>7</td> </tr> </tbody> </table>

cost to paint house $i$ the given color; an optimal coloring costs $3 + 6 + 4 + 8 + 5 + 8 = 34$.

**House coloring problem: dynamic programming formulation**

**Subproblems.**
- $R(i) = $ min cost to paint houses $1, \ldots, i$ with house $i$ red.
- $G(i) = $ min cost to paint houses $1, \ldots, i$ with house $i$ green.
- $B(i) = $ min cost to paint houses $1, \ldots, i$ with house $i$ blue.
- Optimal cost $= \min \{ R(n), G(n), B(n) \}$.

**Dynamic programming recurrence.**
- $R(0) = G(0) = B(0) = 0$
- $R(i) = \text{cost}(i, \text{red}) + \min \{ G(i-1), B(i-1) \}$
- $G(i) = \text{cost}(i, \text{green}) + \min \{ B(i-1), R(i-1) \}$
- $B(i) = \text{cost}(i, \text{blue}) + \min \{ R(i-1), G(i-1) \}$

["optimal substructure": an optimal solution can be constructed from optimal solutions to smaller subproblems]

**Bottom-up DP trace.** Given $R(i)$, $G(i)$, and $B(i)$, it is easy to compute $R(i+1)$, $G(i+1)$, and $B(i+1)$. For example,
\[ B(6) = \text{cost}(6, \text{blue}) + \min \{ R(5), G(5) \} = 7 + \min \{ 29, 32 \} = 36. \]

**Bottom-up DP implementation.**

```java
// r[i], g[i], b[i] hold R(i), G(i), B(i); Java initializes the arrays to 0,
// which covers the base case R(0) = G(0) = B(0) = 0.
int[] r = new int[n+1];
int[] g = new int[n+1];
int[] b = new int[n+1];
for (int i = 1; i <= n; i++) {
    r[i] = cost[i][RED]   + Math.min(g[i-1], b[i-1]);
    g[i] = cost[i][GREEN] + Math.min(b[i-1], r[i-1]);
    b[i] = cost[i][BLUE]  + Math.min(r[i-1], g[i-1]);
}
return min3(r[n], g[n], b[n]);   // min3: minimum of its three arguments
```

**Proposition.** Takes $\Theta(n)$ time and uses $\Theta(n)$ extra space.

Algorithm Design - analysis of algorithms - greed - reduction - dynamic programming - divide-and-conquer - randomization
https://algs4.cs.princeton.edu

**Divide and conquer.**
- Break up a problem into two or more independent subproblems.
- Solve each subproblem recursively.
- Combine the solutions to the subproblems to form a solution to the original problem.

Familiar examples.
- Mergesort.
- Quicksort.

More classic examples.
- Closest pair.
- Convolution and FFT.
- Matrix multiplication.
- Integer multiplication.

Prototypical usage. Turn a brute-force $\Theta(n^2)$ algorithm into a $\Theta(n \log n)$ one.

**Personalized recommendations.** A music site tries to match your song preferences with others.
- Your ranking of songs: $0, 1, \ldots, n-1$.
- My ranking of songs: $a_0, a_1, \ldots, a_{n-1}$.
- The music site consults its database to find people with similar tastes.

**Kendall–tau distance.** Number of inversions between two rankings.
**Inversion.** Songs $i$ and $j$ are inverted if $i < j$ but $a_i > a_j$.

<table> <thead> <tr> <th></th> <th>A</th> <th>B</th> <th>C</th> <th>D</th> <th>E</th> <th>F</th> <th>G</th> <th>H</th> </tr> </thead> <tbody> <tr> <td><strong>you</strong></td> <td>0</td> <td>1</td> <td>2</td> <td>3</td> <td>4</td> <td>5</td> <td>6</td> <td>7</td> </tr> <tr> <td><strong>me</strong></td> <td>0</td> <td>2</td> <td>3</td> <td>1</td> <td>4</td> <td>5</td> <td>7</td> <td>6</td> </tr> </tbody> </table>

3 inversions: 2–1, 3–1, 7–6

**Problem.** Given a permutation of length $n$, count the number of inversions.

0 2 3 1 4 5 7 6 (3 inversions: 2–1, 3–1, 7–6)

**Brute-force $\Theta(n^2)$ algorithm.** For each $i < j$, check if $a_i > a_j$.
**A bit better.** Run insertion sort; return the number of exchanges.
**Goal.** $\Theta(n \log n)$ time (or better).
## Counting Inversions: Divide-and-Conquer

input: 0 4 3 7 9 | 1 5 8 2 6

- Count inversions in the left subarray (0 4 3 7 9): 1 inversion (4–3).
- Count inversions in the right subarray (1 5 8 2 6): 3 inversions (5–2, 8–2, 8–6).
- Count inversions with one element in each subarray: 13 inversions (3–1, 3–2, 4–1, 4–2, 7–1, 7–2, 7–5, 7–6, 9–1, 9–2, 9–5, 9–6, 9–8).

Total: 1 + 3 + 13 = 17. This last step seems to require $\Theta(n^2)$ time.

Refinement: count and sort, mergesort style.
- Input: 0 4 3 7 9 1 5 8 2 6.
- Count inversions in the left subarray and sort: 0 3 4 7 9.
- Count inversions in the right subarray and sort: 1 2 5 6 8.
- Count inversions with one element in each sorted subarray and merge into a sorted whole: 0 1 2 3 4 5 6 7 8 9.

What is the running time of the algorithm as a function of $n$?
A. $\Theta(n)$
B. $\Theta(n \log n)$
C. $\Theta(n \log^2 n)$
D. $\Theta(n^2)$
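A compact Java sketch of the count-and-sort idea (a modified mergesort, written for this summary rather than taken from the course code):

```java
public class CountInversions {
    // Counts inversions in a[lo..hi] while sorting that range (mergesort).
    private static long count(int[] a, int[] aux, int lo, int hi) {
        if (hi <= lo) return 0;
        int mid = lo + (hi - lo) / 2;
        long inversions = count(a, aux, lo, mid)       // left subarray
                        + count(a, aux, mid + 1, hi);  // right subarray
        // Merge, counting inversions with one element in each sorted half:
        // whenever an element of the right half is taken before the remaining
        // left elements, it is inverted with each of them.
        System.arraycopy(a, lo, aux, lo, hi - lo + 1);
        int i = lo, j = mid + 1;
        for (int k = lo; k <= hi; k++) {
            if      (i > mid)           a[k] = aux[j++];
            else if (j > hi)            a[k] = aux[i++];
            else if (aux[j] < aux[i]) { a[k] = aux[j++]; inversions += mid - i + 1; }
            else                        a[k] = aux[i++];
        }
        return inversions;
    }

    public static long count(int[] a) {
        int[] copy = a.clone();                  // leave the caller's array untouched
        return count(copy, new int[a.length], 0, a.length - 1);
    }

    public static void main(String[] args) {
        int[] a = { 0, 4, 3, 7, 9, 1, 5, 8, 2, 6 };
        System.out.println(count(a));            // prints 17, as in the trace above
    }
}
```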
Algorithm Design - analysis of algorithms - greed - reduction - dynamic programming - divide-and-conquer - randomization

**Randomized algorithms.** Algorithms whose performance (or output) depends on the results of random coin flips.

Familiar examples.
- Quicksort.
- Quickselect.
- Karger's algorithm.

More classic examples.
- Miller–Rabin primality testing.
- Rabin–Karp substring search.
- Polynomial identity testing.
- Volume of a convex body.
- Universal hashing.
- ...

**Nuts and bolts**

**Problem.** A disorganized carpenter has a mixed pile of $n$ nuts and $n$ bolts.
- The goal is to find the corresponding pairs of nuts and bolts.
- Each nut fits exactly one bolt; each bolt fits exactly one nut.
- By fitting a nut and a bolt together, the carpenter can determine which is bigger.

**Brute-force algorithm.** Compare each bolt to each nut: $\Theta(n^2)$ compares.
**Challenge.** Design an algorithm that makes $O(n \log n)$ compares.

**Shuffle.** Shuffle the nuts and bolts.
**Partition.**
- Pick the leftmost bolt $x$ and compare it against all nuts; divide the nuts smaller than $x$ from those larger than $x$.
- Let $x'$ be the nut that matches bolt $x$. Compare $x'$ against all bolts; divide the bolts smaller than $x'$ from those larger than $x'$.

**Divide-and-conquer.** Recursively solve the two independent subproblems.

What is the expected running time of the randomized algorithm as a function of $n$?
A. $\Theta(n)$
B. $\Theta(n \log n)$
C. $\Theta(n \log^2 n)$
D. $\Theta(n^2)$

**Hiring bonus.** Design an algorithm that takes $O(n \log n)$ time in the worst case.

Chapter 27. Matching Nuts and Bolts in $O(n \log n)$ Time (Extended Abstract). János Komlós, Yuan Ma, Endre Szemerédi. Abstract: Given a set of $n$ nuts of distinct widths and a set of $n$ bolts such that each nut corresponds to a unique bolt of the same width, how should we match every nut with its corresponding bolt by comparing nuts with bolts (no comparison is allowed between two nuts or between two bolts)? The problem can be naturally viewed as a variant of the classic sorting problem as follows. Given two lists of $n$ numbers each such that one list is a permutation of the other, how should we sort the lists by comparisons only between numbers in different lists? We give an $O(n \log n)$-time deterministic algorithm for the problem. This is optimal up to a constant factor and answers an open question posed by Alon, Blum, Fiat, Kannan, Naor, and Ostrovsky [3]. Moreover, when copies of nuts and bolts are allowed, our algorithm runs in optimal $O(\log n)$ time on $n$ processors in Valiant's parallel comparison tree model. Our algorithm is based on the AKS sorting algorithm with substantial modifications.

Algorithm Design - analysis of algorithms - greed - reduction - dynamic programming - divide-and-conquer - randomization - credits

**Credits**

Co-instructors and graduate student preceptors: Pedro Paredes, Marcel Dall'Agnoll, Bob Tarjan, Natalia K., Dongsheng Yang, Sabhya Chhabria, Wei Luo, Malinda Huang, Shelley Xia. Undergrad graders and lab TAs: apply to be one next semester!

"Algorithms and data structures are love. Algorithms and data structures are life." – anonymous COS 226 student

<table> <thead> <tr> <th>image</th> <th>source</th> <th>license</th> </tr> </thead> <tbody> <tr> <td><strong>Egg Drop</strong></td> <td>New York Times</td> <td></td> </tr> <tr> <td><strong>Broken Egg</strong></td> <td>Adobe Stock</td> <td>education license</td> </tr> <tr> <td><strong>Greed is Good</strong></td> <td>Dennis Dugan</td> <td></td> </tr> <tr> <td><strong>Coin Changing</strong></td> <td>unknown</td> <td></td> </tr> <tr> <td><strong>U.S. Coins</strong></td> <td>Adobe Stock</td> <td>education license</td> </tr> <tr> <td><strong>Cash Register</strong></td> <td>Adobe Stock</td> <td>education license</td> </tr> <tr> <td><strong>Divide-and-Conquer T-Shirt</strong></td> <td>Zazzle</td> <td></td> </tr> <tr> <td><strong>Coin Toss</strong></td> <td>clipground.com</td> <td>CC BY 4.0</td> </tr> <tr> <td><strong>Nuts and Bolts</strong></td> <td>Adobe Stock</td> <td>education license</td> </tr> </tbody> </table>
{"Source-Url": "https://www.cs.princeton.edu/courses/archive/fall23/cos226/lectures/AlgorithmDesign.pdf", "len_cl100k_base": 6450, "olmocr-version": "0.1.53", "pdf-total-pages": 45, "total-fallback-pages": 0, "total-input-tokens": 78890, "total-output-tokens": 7806, "length": "2e12", "weborganizer": {"__label__adult": 0.0006647109985351562, "__label__art_design": 0.0006570816040039062, "__label__crime_law": 0.0007996559143066406, "__label__education_jobs": 0.0030078887939453125, "__label__entertainment": 0.00015306472778320312, "__label__fashion_beauty": 0.0003306865692138672, "__label__finance_business": 0.000316619873046875, "__label__food_dining": 0.0007414817810058594, "__label__games": 0.0018672943115234375, "__label__hardware": 0.00225067138671875, "__label__health": 0.0014743804931640625, "__label__history": 0.0005726814270019531, "__label__home_hobbies": 0.00023567676544189453, "__label__industrial": 0.0007081031799316406, "__label__literature": 0.0005326271057128906, "__label__politics": 0.0005240440368652344, "__label__religion": 0.0008807182312011719, "__label__science_tech": 0.09722900390625, "__label__social_life": 0.00019502639770507812, "__label__software": 0.003932952880859375, "__label__software_dev": 0.88037109375, "__label__sports_fitness": 0.00079345703125, "__label__transportation": 0.0014400482177734375, "__label__travel": 0.00033402442932128906}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 17937, 0.02936]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 17937, 0.1453]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 17937, 0.69883]], "google_gemma-3-12b-it_contains_pii": [[0, 154, false], [154, 383, null], [383, 383, null], [383, 537, null], [537, 572, null], [572, 844, null], [844, 1130, null], [1130, 1649, null], [1649, 2180, null], [2180, 2389, null], [2389, 3002, null], [3002, 3124, null], [3124, 3655, null], [3655, 3966, null], [3966, 4272, null], [4272, 5024, null], [5024, 5722, null], [5722, 5876, null], [5876, 6388, null], [6388, 6874, null], [6874, 7525, null], [7525, 7779, null], [7779, 7933, null], [7933, 8416, null], [8416, 8941, null], [8941, 9927, null], [9927, 10138, null], [10138, 10548, null], [10548, 10702, null], [10702, 11144, null], [11144, 11763, null], [11763, 12146, null], [12146, 13269, null], [13269, 13737, null], [13737, 13872, null], [13872, 13994, null], [13994, 14342, null], [14342, 14783, null], [14783, 15215, null], [15215, 15401, null], [15401, 16614, null], [16614, 16746, null], [16746, 16996, null], [16996, 17109, null], [17109, 17937, null]], "google_gemma-3-12b-it_is_public_document": [[0, 154, true], [154, 383, null], [383, 383, null], [383, 537, null], [537, 572, null], [572, 844, null], [844, 1130, null], [1130, 1649, null], [1649, 2180, null], [2180, 2389, null], [2389, 3002, null], [3002, 3124, null], [3124, 3655, null], [3655, 3966, null], [3966, 4272, null], [4272, 5024, null], [5024, 5722, null], [5722, 5876, null], [5876, 6388, null], [6388, 6874, null], [6874, 7525, null], [7525, 7779, null], [7779, 7933, null], [7933, 8416, null], [8416, 8941, null], [8941, 9927, null], [9927, 10138, null], [10138, 10548, null], [10548, 10702, null], [10702, 11144, null], [11144, 11763, null], [11763, 12146, null], [12146, 13269, null], [13269, 13737, null], [13737, 13872, null], [13872, 13994, null], [13994, 14342, null], [14342, 14783, null], [14783, 15215, null], [15215, 15401, null], [15401, 16614, 
null], [16614, 16746, null], [16746, 16996, null], [16996, 17109, null], [17109, 17937, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 17937, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 17937, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 17937, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 17937, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 17937, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 17937, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 17937, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 17937, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 17937, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 17937, null]], "pdf_page_numbers": [[0, 154, 1], [154, 383, 2], [383, 383, 3], [383, 537, 4], [537, 572, 5], [572, 844, 6], [844, 1130, 7], [1130, 1649, 8], [1649, 2180, 9], [2180, 2389, 10], [2389, 3002, 11], [3002, 3124, 12], [3124, 3655, 13], [3655, 3966, 14], [3966, 4272, 15], [4272, 5024, 16], [5024, 5722, 17], [5722, 5876, 18], [5876, 6388, 19], [6388, 6874, 20], [6874, 7525, 21], [7525, 7779, 22], [7779, 7933, 23], [7933, 8416, 24], [8416, 8941, 25], [8941, 9927, 26], [9927, 10138, 27], [10138, 10548, 28], [10548, 10702, 29], [10702, 11144, 30], [11144, 11763, 31], [11763, 12146, 32], [12146, 13269, 33], [13269, 13737, 34], [13737, 13872, 35], [13872, 13994, 36], [13994, 14342, 37], [14342, 14783, 38], [14783, 15215, 39], [15215, 15401, 40], [15401, 16614, 41], [16614, 16746, 42], [16746, 16996, 43], [16996, 17109, 44], [17109, 17937, 45]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 17937, 0.10354]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
58e661ace827d3f70805a6ce65a5caa0bb7e1fd0
A Parallel Approach to Concolic Testing with Low-cost Synchronization

Xiao Yu, Shuai Sun, Geguang Pu, Siyuan Jiang, Zheng Wang
Software Engineering Institute, East China Normal University, Shanghai, China

Abstract

This paper presents a practical approach to parallelizing a test data generation algorithm so that computing resources can be fully used. The test data generation approach we use is based on dynamic symbolic execution (concolic testing). The basic idea of parallelizing the algorithm is to distribute the analysis of different paths to different computing units. Although a centralized scheduler with several sub-processes can directly achieve the goal of parallelism, it may cause global idle time when parallel processes frequently end at the same time. In our approach, a runtime deterministic scheduler is introduced to reduce this potential global idle time. Our experiments show notable results when a proper scheduling function is used: compared with sequential concolic testing, our approach can save nearly 70% of the computing time in some cases on a system with eight CPU cores.

Keywords: Parallel Algorithm, Automatic Test Generation, Symbolic Execution

1 Introduction and Motivation

Software testing is the most popular methodology for finding bugs, and a number of tools focus on automating the generation of test cases. Pex [17] is an automated unit testing tool that can automatically generate test inputs for program units on the .NET platform. CUTE [16] is another tool that can analyze and generate test inputs for C program units. The techniques behind automated test case generation tools have developed over a long time. Symbolic execution [12], initially introduced in the 1970s, uses symbolic values in place of concrete inputs while the target program is being executed. By using a constraint solver, it is possible to obtain a set of precise concrete inputs from a set of path constraints that strictly satisfy that path. The basic idea of symbolic execution is novel and useful, and it has inspired many follow-up works on test generation.

Recently, the concept of concolic testing [11,16,8] has been proposed. Concolic testing is a variant of symbolic execution with the advantage that concrete program states are also stored to guide the process of symbolic execution. Path constraints are collected incrementally, and some of them are replaced by concrete states. It is an enhanced dynamic symbolic execution [13] technique: the constraints collected along one path can be simplified, which is a novel way to improve the usability and performance of pure symbolic execution. Based on this improvement, many other techniques [15,7,6,10,5] have been proposed to further improve the usability of concolic testing.

Test case generation techniques face the path-state explosion problem, since the number of program paths grows radically with program scale. For example, the following code fragment has only 35 lines; it tries to find the word 'web' and the word 'ebay' in a given string of 5 characters, using a typical state-machine approach. Every loop iteration contains 6 first-level branches, each of which contains 3 or 4 second-level branches. Because the first-level branches do not overlap with each other, there are in all 23 branch conditions in a single loop iteration. Considering the looping condition, the feasible paths can be deep and the number of feasible paths can be enormous.
We tested this function on an Intel Core i7 platform. The result showed that the sequential concolic test took about 84.6 seconds to complete the search of about 3400 feasible paths.

```c
#include <stdio.h>

void foo(char c[]) {
    int state = 0, idx = 0;
    while (c[idx] > 0) {
        if (state == 0) {
            if (c[idx] == 'w') state = 1;
            else if (c[idx] == 'e') state = 2;
            else state = 0;
        } else if (state == 1) {            /* last character was 'w' */
            if (c[idx] == 'w') state = 1;
            else if (c[idx] == 'e') state = 3;
            else state = 0;
        } else if (state == 2) {            /* last character was 'e' */
            if (c[idx] == 'w') state = 1;
            else if (c[idx] == 'e') state = 2;
            else if (c[idx] == 'b') state = 4;
            else state = 0;
        } else if (state == 3) {            /* "we" seen */
            if (c[idx] == 'w') state = 1;
            else if (c[idx] == 'e') state = 2;
            else if (c[idx] == 'b') state = 6;
            else state = 0;
        } else if (state == 4) {            /* "eb" seen */
            if (c[idx] == 'w') state = 1;
            else if (c[idx] == 'e') state = 2;
            else if (c[idx] == 'a') state = 5;
            else state = 0;
        } else if (state == 5) {            /* "eba" seen */
            if (c[idx] == 'w') state = 1;
            else if (c[idx] == 'e') state = 2;
            else if (c[idx] == 'y') state = 7;
            else state = 0;
        } else if (state > 5) {             /* "web" or "ebay" found */
            printf("\nHit\n");
            break;
        }
        idx++;
    }
}
```

To overcome this problem, researchers have proposed several methods. For instance, Boonstoppel et al. proposed a simple algorithm called RWset [6] to reduce the number of traversed code paths by means of a side-effect analysis among the variables appearing in paths. Godefroid proposed the SMART algorithm [10,5] to reduce the cost of the compositional state explosion problem caused by compositional units in programs.

In this paper, we propose a parallel approach to concolic testing. It takes a different view of improving usability from previous work [6,10,5]: our parallel approach contributes a way to reduce the time cost of path coverage and computation. The path executions, together with the condition computation tasks, are scheduled onto the computation units of the parallel system by a scheduling policy, which in this method is a path partition. One path execution generates a certain number of new path prefix tasks, which are appended to the task queue. The general idea of this parallel approach is to schedule the tasks to different computation units and to maintain the global task queue. The distribution method makes sure that all tasks are processed and no task is repeated; it also tries to equalize the computation time over different cores in order to make the best use of the computational units. Although, in the scenario of parallel concolic testing, a pattern consisting of a centralized scheduler and several sub-processes can directly achieve the goal of parallelism, it may cause global idle time when parallel processes frequently end simultaneously. Rather than building a centralized scheduler, we introduce a deterministic scheduler on each working unit, which considerably reduces the synchronization time cost. The details of the method are expanded in the following sections.

The main contributions of our work are:
- enhancing traditional concolic testing with parallel capability;
- introducing the concept of a runtime deterministic scheduler in order to reduce synchronization time;
- implementing a parallel concolic testing framework with positive experimental results.

In this paper, Section 2 describes the basics of concolic testing and the details of how parallelism is achieved. Section 3 shows the results of the experiments and discusses them. The last section gives the conclusion.

2 Parallel Approach

The parallel approach to test case generation is introduced in this section.
2.1 Background: Concolic Testing

Concolic testing [16], also referred to as dynamic symbolic execution testing [13], is a variant of symbolic execution. The idea is simple but powerful: it combines symbolic and concrete execution to generate test inputs dynamically. By monitoring the path execution of the test unit, the branch conditions along the execution path are collected. In order to explore new paths, the conditions of the branches are collected as constraints over symbols related to the input variables. A symbolic constraint solver then processes the constraints and returns a solution, which forms the input of the next execution iteration. This process is iterated until no new path can be generated, which indicates that the path tree of the test unit has been fully explored. The whole process is the process of test case generation, and each iteration consists of one path execution, constraint processing and constraint solving.

During the concolic testing of one test unit, two structures are maintained between successive iterations. One is the global path decision tree $T$, which contains the path recorded in every iteration and shares path information between iterations. The other is a sequence of values $M$ which provides concrete values for the sequence of input variables $I$.

The whole process of sequential concolic testing runs within a main iteration. In each iteration, it substitutes the input $I$ with the values $M$ in the target program $P$ and starts to execute $P$ concretely and symbolically. The result of the execution of the target program can be treated as a triple consisting of the execution trace $t$, the decision path $p$ and the path feasibility, which is either feasible or infeasible, indicating whether the execution of the target program goes through the expected path or is aborted abnormally. The algorithm terminates when no more expected path prefixes can be explored. Some technical details can be found in [20].

2.2 Parallel Model

This subsection presents the parallel algorithm of concolic test case generation. We design a parallel model that makes the process of test case generation run concurrently. An interesting point of this parallel algorithm is that we dynamically divide the whole path space into disjoint areas that can be managed and updated by different computing units. Thus, each computing unit can freely access and analyze the paths belonging to its own allocated area. This means that global synchronization among the parallel computing units is fundamentally removed, which further improves the performance of parallelized concolic testing. The removal of global synchronization is implemented by a runtime task scheduler which allows each computing unit to safely update its own data on a shared global decision tree.

2.2.1 Architecture

Figure 1 shows the basic architecture of our parallel model. There are three roles in the parallel model: Worker, WorkerStub and Coordinator. Instances of Worker perform the actual concolic testing simultaneously on different computing units. Each instance of Worker is managed by a corresponding instance of WorkerStub. The WorkerStub is the essential part of our parallel algorithm for the reduction of global synchronization: instead of a centralized task scheduler, each WorkerStub holds a runtime deterministic task scheduler, which will be explained later.
Specifically, the WorkerStub takes charge of (1) starting one instance of Worker in each iteration, (2) assigning path computing tasks to the Worker, (3) collecting feedback from the Worker, (4) exchanging computing tasks with other instances of WorkerStub, and (5) managing a partial path decision tree and reporting it to the Coordinator.

The algorithm of the Worker is shown in Algorithm 1. A Worker takes $(P, I, choice, path, trace)$ assigned by the WorkerStub as inputs, where $P$ is the target program to be tested, $I$ is the sequence of input variables, $choice$ is the truth-value assignment for the expected path prefix, $path$ is the path in the decision tree that relates to this prefix, and $trace$ is the program trace which relates to the expected path prefix.

Worker(P, I, choice, path, trace)
  Inputs:
    P      - the target program to be tested
    I      - the sequence of input variables
    choice - the truth-value assignments for the expected path prefix
    path   - one path from the entire decision tree which relates to the prefix choice
    trace  - the program trace which relates to path
  Returns:
    (feasible, p, t, S_p), where p is the resulting path of P, t is the resulting trace
                           and S_p is the set of all prefix paths related to p; or
    infeasible, when no solution satisfies choice on path.

  let M be the constraint solution values corresponding to the input I
  M := ⟨⟩
  if path ≠ nil then
    let c_1, c_2, ..., c_n be all non-leaf nodes of path and C the whole sequence of constraints
    for i = 1 to n do
      append GetRealConstraint(c_i, trace, I) to C
    end for
    M := SolveConstraints(C, choice)
  else
    M := GenerateRandomInput(I)
  end if
  if M = ⟨⟩ then return infeasible
  (t, p, s) := ConcreteAndSymbolicExecution(P, I, M)
  if s = infeasible then return infeasible
  S_p := ∅
  c := GetBranchChoice(p)
  while c is not empty do
    let i_1, i_2, ..., i_n be the branch choices in c
    expected := FlipLastChoice(c)
    S_p := S_p ∪ { expected }
    c := c − i_n        (drop the last choice)
  end while
  return (feasible, p, t, S_p)

Algorithm 1. The algorithm run by the worker

After solving the constraints of $choice$ and executing the target program with the solving result, the Worker returns $(feasible, p, t, S_p)$ for a feasible path or $infeasible$ for an infeasible one. If the path is feasible, the complete executed path $p$ (whose prefix is obviously $choice$) and the corresponding program trace $t$ are sent back to the WorkerStub in order to build a partial path decision tree. If the path is infeasible, the original expected path prefix is marked as infeasible. Besides, the $p$-related path prefix set $S_p$ is also returned to the WorkerStub to create new tasks. The set $S_p$ is computed by negating the last constraint assignment of every prefix of the path $p$ (the loop around the call to FlipLastChoice in Algorithm 1).
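As an illustration of that step, here is a small Java sketch of how such a set of flipped prefixes could be produced from a decision path represented as truth values; the class and method names are hypothetical and not taken from CAUT.

```java
import java.util.ArrayList;
import java.util.List;

class PrefixGenerator {
    // Builds S_p: for every prefix p[0..i] of the executed decision path,
    // negate its last branch decision p[i], mirroring the FlipLastChoice loop
    // of Algorithm 1.
    static List<boolean[]> flippedPrefixes(boolean[] path) {
        List<boolean[]> sp = new ArrayList<>();
        for (int i = path.length - 1; i >= 0; i--) {
            boolean[] prefix = new boolean[i + 1];
            System.arraycopy(path, 0, prefix, 0, i + 1);
            prefix[i] = !prefix[i];          // flip the last choice of this prefix
            sp.add(prefix);
        }
        return sp;
    }
}
```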
When a WorkerStub has been started, it begins to maintain an individual task list and a partial decision tree, which is built from the completed tasks in that task list. If the started WorkerStub is the first instance and the task list is empty, it starts an instance of Worker with empty inputs in order to get the first path data from the target program. Otherwise, the WorkerStub waits for tasks sent from other WorkerStubs to its individual task list and then runs a series of Workers iteratively to compute the tasks. When an instance of WorkerStub has received a set $S_p$ from its Worker, it first uses the local deterministic task scheduler to decide, for each specific path prefix in $S_p$, which WorkerStub should receive it as an individual task. After this assignment, every item in $S_p$, together with its related program trace, is sent as an individual task to the WorkerStub indicated by the scheduler.

The Coordinator maintains a global view of the path decision tree by periodically collecting and merging the partial trees from all instances of WorkerStub. It initially starts several instances of WorkerStub (the exact number is determined by the number of processors installed on the target computer), and it terminates the whole testing process when the global path decision tree is full.

In the generalized view of the parallel model (see Figure 2), the Workers with their WorkerStubs are the parallelized units. A Worker communicates only with its own WorkerStub, from which it receives computing tasks and to which it sends results. The Coordinator mainly controls the termination. The key to low-cost synchronization is the deterministic task scheduler, which makes the task scheduling free from a global serialized task list. The shared tasks and the shared tree are divided into a set of disjoint areas by the deterministic task scheduler, which will be explained in the following section.

Fig. 2. Processes of the parallel concolic testing model (Coordinator, WorkerStub, Worker)

2.2.2 Deterministic Task Scheduler

A naive task scheduler (Figure 3a) usually maintains a centralized task list to schedule tasks. When more than one free worker is waiting for tasks, the scheduler has to serialize the assignment of tasks to avoid data races among workers (e.g., to avoid two workers getting the same task). This serialization means that the workers have to wait in a queue, which wastes computing time. The underlying reason for this imperative serialization is the nondeterminism of the scheduling plan: tasks are simply distributed to any free worker, and a task in the list can be accomplished by any worker. In our parallel model, by contrast, dynamically generated paths can be scheduled among the different computing units by a uniform deterministic task scheduler.

The deterministic task scheduler (Figure 3b) is designed to overcome the disadvantages of the nondeterministic scheduler. As can be seen in Figure 3b, the centralized structure, along with its connections to the workers, is eliminated. The workers connect directly with each other to exchange path records only when necessary (as determined by the deterministic scheduler on each worker). The effect of the deterministic scheduler is that, for each task generated by the testing process, the scheduler tells which worker should compute the task by a universal, independent algorithm instead of randomly allowing some free worker to compute it. Thus, the deterministic scheduler can be placed in each computing unit instead of being a global one, which lets the serialized task list be separated across the computing units. Finally, global synchronization is eliminated. Several observations should be kept in mind to implement the deterministic task scheduler.
The global path decision tree of a program is combined dynamically through the concolic testing iterations. Each Worker iteration consumes only one task, consisting of a path prefix that corresponds to historical program traces, and generates only one path from that prefix, whether the path turns out to be feasible or infeasible. Each path, along with its prefix on the tree, is computed independently of the others. All paths can be projected onto a discrete space, which means they can be divided into partitions. By dividing and projecting the paths and prefixes of the tree onto this discrete space, the whole computation for the target program is naturally classified into several disjoint regions of the space. Every working unit takes charge of one region, so all working units know which working unit should take charge of a specific task.

Formally, the deterministic task scheduler is implemented by a function $(\mathcal{F} \circ \mathcal{H})(p)$, where the sub-function $\mathcal{H}$ has as its domain all possible paths of the binary decision tree and as its image a bounded range of positive integers representing the abstract path space. The function $\mathcal{F}$ maps the image of $\mathcal{H}$ to integers in the range $[0, MAX\_UNIT\_CNT]$ representing regions of the space. The function must guarantee that different computing units get the same result for the same path $p$ (i.e., it is deterministic).

The sub-function $\mathcal{H}$ can be implemented in many ways, and different implementations affect the parallel concolic testing differently. A good design of the scheduling function is a hard problem. One reason for its difficulty is that the shape of the path tree varies: different programs, and even different units of the same program, have different kinds of path distributions, so there can hardly be a universal scheduler that performs equally well on every testing unit. Another reason is the inability to predict the running time of each Worker iteration: even if we find a scheduler that balances the number of paths over the workers, the total time cost on different workers may still be unbalanced. Although we do not have an excellent scheduler function, we have come up with some standards that a good scheduler function should observe. A good scheduler function should divide the space as uniformly as possible, so that each computing unit has almost the same number of tasks; it is even better if the scheduler also balances the overall computing time on each computing unit. A poor scheduler function, on the other hand, fails to distribute the number of tasks and the computing time equitably over the computing units, which can leave some computing units extremely busy while the others simply waste time waiting.

For instance, let $p$ be the length of the path to be scheduled and $\mathcal{H}$ be the hash policy used in the scheduler. Then we may define the scheduler function $(\mathcal{F} \circ \mathcal{H})(p)$ by
\[ \mathcal{H}(p) = p \bmod MAX\_UNIT\_CNT, \qquad \mathcal{F}(p) = \mathcal{H}(p). \]
This scheduler assigns a task to the working unit identified by the length of the path prefix modulo the number of computing units. If the number of computing units is larger than the length of the longest path in the target program, some computing units will remain starved for a long time.
Thus, the design of a fair function $\mathcal{H}$ is important to improve the overall testing performance.

2.2.3 Motivating Example Revisited

The motivating example from Section 1 can be processed efficiently. We tested the example on an Intel Core i7 platform equipped with eight logical processors. Compared with the result of the sequential testing (84.6 seconds), the parallel testing took only 21.4 seconds to complete the search of about 3400 feasible paths. During the parallel testing, eight processes are started at the same time to explore the path space, and the CPU is fully utilized for solving the constraints of long paths and exploring more paths from existing ones. The percentage of performance improvement is determined not only by the number of computing units but also by the complexity of the target program under test. In the extreme case of the example presented here, the performance improvement is huge; it shows that the parallel approach has a greater advantage for large programs involving longer paths and more complex path conditions.

3 Evaluation

We have implemented the parallel algorithm and integrated it into the unit testing toolkit CAUT [1]. In this section, some details of the experiments are given, some typical results are shown, and explanations for them are provided. Our experiments were conducted on a 2.66 GHz Intel Core i7 CPU running Windows 7 with 6 GB RAM, which provides eight logical processors.

3.1 Experiment preparation

The experimental examples mainly come from SIR [2], including bash, flex, grep, make, printtoken2 and schedule. Other examples are algebra linear [3] and the micro OpenGL core (c00nGL) [4]. We selected parts of those programs, rather than the whole programs, to ensure that the testing time of each experiment stayed below 15 minutes in single-core mode. For each example, we tested every function of the target program one by one and then summed the data of every tested unit (such as the numbers of feasible paths) to obtain the result data. The calling dependencies of the unit under test were replaced by mock functions, and environment inputs were transformed into arguments of the testing unit, as they may disturb program paths.

The scheduler function $\mathcal{H}$ we adopted is a general hash function with range $[0, 2^{32})$, while the function $\mathcal{F}$ divides the range of $\mathcal{H}$ equally among the computing units. The functions are defined as follows:

```c
/* Hash the node values of the path prefix p (of length len) into [0, 2^32). */
unsigned int H(const int *p, int len) {
    unsigned int hash = 0;
    for (int i = 0; i < len; i++)
        hash = (hash << 5) + hash + (unsigned int)p[i];   /* hash * 33 + node value */
    return hash;
}

/* Map the hash value to a worker identifier in [0, MAX_UNIT_CNT). */
unsigned int F(const int *p, int len) {
    return H(p, len) % MAX_UNIT_CNT;
}
```

The scheduler above was adopted on the basis of our experimental tries. As discussed in the previous section, it was selected according to the stated observations, although this instance of the scheduler is not guaranteed to be the best solution.

3.2 Results

The experimental results are shown in Table 1, where the fourth column gives the number of feasible paths of each example we tested, and the last three columns give the time cost of the sequential (single-core) and parallel (dual-core) modes and their ratio. They clearly demonstrate that the time cost of the parallel mode is lower than that of the sequential mode with the given hardware resources. Because the scheduler function behaves differently on different examples, the acceleration percentage (the last column) varies over a wide range (the lowest, for grep, is 105.44%, while the highest, for printtoken2, is 133.43%).
In a good case, e.g. printtoken2, the scheduler function assigns the generated paths almost uniformly to the two processors; the statistics support this reasoning, as in printtoken2 one processor received 315 cross-CPU tasks while the other received 304. Taking grep as another example, our parallel algorithm does not perform as well: the experimental data show that the path allocation is not balanced between the two processors, with one assigned 782 cross-CPU tasks while the other has only 208, and the computing tasks of 14 out of 19 functions cannot be parallelized at all. To explain this, we analyzed the source code of grep and found that many paths in the grep program are short, which leads to the poor performance of our adopted scheduler function, because it may map the short paths to the same processor. The other examples that do not behave well under the parallel algorithm suffer from the same cause.

Table 1 Experimental results

<table> <thead> <tr> <th>Program</th> <th>Units</th> <th>Lines</th> <th>Feasible Paths</th> <th>Single-Core (ms)</th> <th>Dual-Core (ms)</th> <th>Single:Dual</th> </tr> </thead> <tbody> <tr> <td>algebra linear</td> <td>27</td> <td>3240</td> <td>1657</td> <td>725398</td> <td>553444</td> <td>130.74%</td> </tr> <tr> <td>bash</td> <td>35</td> <td>1170</td> <td>2002</td> <td>336139</td> <td>257466</td> <td>130.56%</td> </tr> <tr> <td>c00nGL</td> <td>26</td> <td>1282</td> <td>226</td> <td>76242</td> <td>60864</td> <td>125.27%</td> </tr> <tr> <td>flex</td> <td>25</td> <td>538</td> <td>3150</td> <td>758809</td> <td>587220</td> <td>129.22%</td> </tr> <tr> <td>grep</td> <td>19</td> <td>1215</td> <td>505</td> <td>101050</td> <td>95835</td> <td>105.44%</td> </tr> <tr> <td>make</td> <td>26</td> <td>786</td> <td>1769</td> <td>136716</td> <td>128294</td> <td>106.56%</td> </tr> <tr> <td>printtoken2</td> <td>13</td> <td>359</td> <td>47</td> <td>176574</td> <td>132333</td> <td>133.43%</td> </tr> <tr> <td>schedule</td> <td>16</td> <td>147</td> <td>100</td> <td>6140</td> <td>5659</td> <td>108.50%</td> </tr> </tbody> </table>

The other experiment shows the trend of performance improvement as processors are added, for five of the examples listed previously. Figure 4 shows the result: the x axis gives the total number of processors and the y axis the ratio between the time cost of the parallel testing and that of the sequential testing. In Figure 4 we can easily see that the performance increases as processors are added. The ratio can reach almost 30% with eight processors for those examples, which means that our parallel algorithm is very effective and can save nearly 70% of the time cost compared with sequential concolic testing. Furthermore, the five curves also suggest that a threshold of performance improvement may be reached: some of the curves drop more rapidly than others, but all of them tend to flatten. The reason is that in our model all path schedulers are local and run in parallel, and they send paths to other processors to be handled; with an increasing number of processors, the communication cost (even though it is asynchronous) increases as well.

4 Conclusion

This paper gives a different perspective on improving the performance of the concolic testing technique by introducing a parallel algorithm.
This kind of method can fully utilize hardware resources, so performance can be further improved by adding processors or computation nodes in a distributed system. The contribution of our work is to bring parallel capability to traditional concolic testing through a low-synchronization framework. The parallel algorithm has been implemented and integrated into CAUT [1], and the practical application of CAUT further confirms the usability of the parallel approach. Compared with other scalable test-case generation techniques, the parallel model provides a practical complement to them.

5 Acknowledgement

Xiao Yu is partially supported by 973 Project No. 2005CB321904. Sun Shuai is partially supported by NSFC No. 90818024. Geguang Pu is partially supported by the Fundamental Research Funds for the Central Universities and

References
{"Source-Url": "http://www.columbia.edu/~ss4088/publication/ttss10.pdf", "len_cl100k_base": 6501, "olmocr-version": "0.1.49", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 35975, "total-output-tokens": 8604, "length": "2e12", "weborganizer": {"__label__adult": 0.0003151893615722656, "__label__art_design": 0.00025343894958496094, "__label__crime_law": 0.0002961158752441406, "__label__education_jobs": 0.0004498958587646485, "__label__entertainment": 5.1021575927734375e-05, "__label__fashion_beauty": 0.00013577938079833984, "__label__finance_business": 0.0001398324966430664, "__label__food_dining": 0.00030875205993652344, "__label__games": 0.0005388259887695312, "__label__hardware": 0.0009365081787109376, "__label__health": 0.0004220008850097656, "__label__history": 0.0001742839813232422, "__label__home_hobbies": 7.325410842895508e-05, "__label__industrial": 0.000301361083984375, "__label__literature": 0.00020265579223632812, "__label__politics": 0.00023174285888671875, "__label__religion": 0.0003924369812011719, "__label__science_tech": 0.01305389404296875, "__label__social_life": 7.486343383789062e-05, "__label__software": 0.004673004150390625, "__label__software_dev": 0.97607421875, "__label__sports_fitness": 0.00028324127197265625, "__label__transportation": 0.0003955364227294922, "__label__travel": 0.00017833709716796875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 33165, 0.03258]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 33165, 0.60372]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 33165, 0.88754]], "google_gemma-3-12b-it_contains_pii": [[0, 2108, false], [2108, 4332, null], [4332, 6205, null], [6205, 8690, null], [8690, 11606, null], [11606, 13170, null], [13170, 14918, null], [14918, 16773, null], [16773, 19024, null], [19024, 21991, null], [21991, 24002, null], [24002, 27155, null], [27155, 28873, null], [28873, 32285, null], [32285, 33165, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2108, true], [2108, 4332, null], [4332, 6205, null], [6205, 8690, null], [8690, 11606, null], [11606, 13170, null], [13170, 14918, null], [14918, 16773, null], [16773, 19024, null], [19024, 21991, null], [21991, 24002, null], [24002, 27155, null], [27155, 28873, null], [28873, 32285, null], [32285, 33165, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 33165, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 33165, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 33165, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 33165, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 33165, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 33165, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 33165, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 33165, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 33165, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 33165, null]], "pdf_page_numbers": [[0, 2108, 1], [2108, 4332, 2], [4332, 6205, 3], [6205, 8690, 4], [8690, 11606, 5], [11606, 13170, 6], [13170, 14918, 7], [14918, 16773, 8], [16773, 19024, 9], [19024, 21991, 10], [21991, 24002, 11], [24002, 27155, 
12], [27155, 28873, 13], [28873, 32285, 14], [32285, 33165, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 33165, 0.07426]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
a23e6577b18ceb3807cdef80ccde89ca85116dec
Chapter 4

Breadth First Search, Dijkstra's Algorithm for Shortest Paths

OLD CS 473: Fundamental Algorithms, Spring 2015
January 29, 2015

4.1 Breadth First Search

4.1.0.1 Breadth First Search (BFS)

Overview
(A) BFS is obtained from BasicSearch by processing edges using a queue data structure.
(B) It processes the vertices in the graph in the order of their shortest distance from the vertex $s$ (the start vertex).

As such...
(A) DFS is good for exploring graph structure.
(B) BFS is good for exploring distances.

4.1.0.2 Queue Data Structure

Queues
A queue is a list of elements which supports the operations:
(A) enqueue: adds an element to the end of the list;
(B) dequeue: removes an element from the front of the list.
Elements are extracted in first-in first-out (FIFO) order, i.e., elements are picked in the order in which they were inserted.

4.1.0.3 BFS Algorithm

Given an (undirected or directed) graph $G = (V, E)$ and a node $s \in V$:

```
BFS(s):
    Mark all vertices as unvisited
    Initialize search tree T to be empty
    Mark vertex s as visited
    Set Q to be the empty queue
    enq(s)
    while Q is nonempty do
        u = deq(Q)
        for each vertex v in Adj(u)
            if v is not visited then
                add edge (u, v) to T
                mark v as visited and enq(v)
```

Proposition 4.1.1. BFS($s$) runs in $O(n + m)$ time.

4.1.0.4 BFS: An Example in Undirected Graphs

[Figure: BFS run on an example undirected graph; the BFS tree is the set of black edges, and the panels show the queue contents at successive steps, e.g. [4,5,7,8], [5,7,8], [7,8,6].]

4.1.0.5 BFS: An Example in Directed Graphs

4.1.0.6 BFS with Distance

```
BFS(s):
    Mark all vertices as unvisited; for each v set dist(v) = infinity
    Initialize search tree T to be empty
    Mark vertex s as visited and set dist(s) = 0
    Set Q to be the empty queue
    enq(s)
    while Q is nonempty do
        u = deq(Q)
        for each vertex v in Adj(u) do
            if v is not visited then
                add edge (u, v) to T
                mark v as visited, enq(v), and set dist(v) = dist(u) + 1
```

4.1.0.7 Properties of BFS: Undirected Graphs

**Proposition 4.1.2.** The following properties hold upon termination of BFS($s$):
(A) The set of vertices in the BFS tree is exactly the connected component of $s$.
(B) If dist($u$) < dist($v$) then $u$ is visited before $v$.
(C) For all $u \in V$, dist($u$) is the length of a shortest path from $s$ to $u$.
(D) If $u, v$ are in the connected component of $s$ and $e = uv$ is an edge of $G$, then either $e$ is an edge of the BFS tree, or $|\text{dist}(u) - \text{dist}(v)| \leq 1$.

**Proof**: Exercise.

4.1.0.8 Properties of BFS: Directed Graphs

**Proposition 4.1.3.** The following properties hold upon termination of $T \leftarrow \text{BFS}(s)$:
(A) The vertex set of the search tree $T$ is the set of vertices reachable from $s$.
(B) If dist($u$) < dist($v$) then $u$ is visited before $v$.
(C) For all $u \in V(T)$, dist($u$) is the length of a shortest path from $s$ to $u$.
(D) If $u$ is reachable from $s$ and $e = (u \rightarrow v) \in E(G)$, then either (i) $e$ is an edge in the search tree, or (ii) dist($v$) − dist($u$) ≤ 1. It is *not* necessarily the case that dist($u$) − dist($v$) ≤ 1.

Proof: Exercise.
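To make the BFS-with-distance pseudocode above concrete, here is a minimal Python sketch (our own illustration; the adjacency-list representation and all names are assumptions, not part of the notes):

```python
# Minimal BFS with distances over an adjacency-list graph (dict of lists).
from collections import deque

def bfs(adj, s):
    """Return shortest unweighted distances from s and the BFS tree edges.
    Unreachable vertices are absent from dist. Runs in O(n + m) time."""
    dist = {s: 0}
    Q = deque([s])
    tree = []                      # BFS tree edges (u, v)
    while Q:
        u = Q.popleft()
        for v in adj[u]:
            if v not in dist:      # v not yet visited
                dist[v] = dist[u] + 1
                tree.append((u, v))
                Q.append(v)
    return dist, tree

# Example: a small undirected graph given as symmetric adjacency lists.
adj = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
print(bfs(adj, 1))   # distances {1:0, 2:1, 3:1, 4:2, 5:3} and the tree edges
```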
4.1.0.9 BFS with Layers

```
BFSLayers(s):
    Mark all vertices as unvisited and initialize T to be empty
    Mark s as visited and set L_0 = {s}
    i = 0
    while L_i is not empty do
        initialize L_{i+1} to be an empty list
        for each u in L_i do
            for each vertex v in Adj(u) do
                if v is not visited then
                    mark v as visited
                    add (u, v) to tree T
                    add v to L_{i+1}
        i = i + 1
```

Running time: $O(n + m)$.

4.1.0.10 Example

[Figure: BFS with layers on an example graph.]

4.1.0.11 BFS with Layers: Properties

**Proposition 4.1.4.** The following properties hold on termination of BFSLayers($s$):
(A) BFSLayers($s$) outputs a BFS tree.
(B) $L_i$ is the set of vertices at distance exactly $i$ from $s$.
(C) If $G$ is undirected, each edge $e = uv$ is one of three types:
  (i) a tree edge between two consecutive layers,
  (ii) a non-tree **forward/backward** edge between two consecutive layers,
  (iii) a non-tree **cross-edge** with both $u, v$ in the same layer.
(D) Consequently, every edge in the graph is between two vertices that are either (i) in the same layer, or (ii) in two consecutive layers.

4.1.0.12 Example: Tree/cross/forward (backward) edges

4.1.1 BFS with Layers: Properties

4.1.1.1 For directed graphs

**Proposition 4.1.5.** The following properties hold on termination of BFSLayers($s$) if $G$ is directed. Each edge $e = (u \rightarrow v)$ is one of four types:
(A) a **tree** edge between consecutive layers, $u \in L_i$, $v \in L_{i+1}$ for some $i \geq 0$;
(B) a non-tree **forward** edge between consecutive layers;
(C) a non-tree **backward** edge;
(D) a **cross-edge** with both $u, v$ in the same layer.

4.2 Bipartite Graphs and an application of BFS

4.2.0.2 Bipartite Graphs

**Definition 4.2.1 (Bipartite Graph).** An undirected graph $G = (V, E)$ is a **bipartite graph** if $V$ can be partitioned into $X$ and $Y$ such that all edges in $E$ are between $X$ and $Y$.

4.2.0.3 Bipartite Graph Characterization

Question: When is a graph bipartite?

**Proposition 4.2.2.** Every tree is a bipartite graph.

*Proof:* Root the tree $T$ at some node $r$. Let $L_i$ be all nodes at level $i$, that is, all nodes at distance $i$ from the root $r$. Now define $X$ to be all nodes at even levels and $Y$ to be all nodes at odd levels. The only edges in $T$ are between consecutive levels.

**Proposition 4.2.3.** An odd length cycle is not bipartite.

4.2.0.4 Odd Cycles are not Bipartite

**Proposition 4.2.4.** An odd length cycle is not bipartite.

*Proof:* Let $C = u_1, u_2, \ldots, u_{2k+1}, u_1$ be an odd cycle. Suppose $C$ is a bipartite graph and let $X, Y$ be the partition. Without loss of generality $u_1 \in X$, which implies $u_2 \in Y$, which implies $u_3 \in X$. Inductively, $u_i \in X$ if $i$ is odd and $u_i \in Y$ if $i$ is even. But $\{u_1, u_{2k+1}\}$ is an edge and both of its endpoints belong to $X$, a contradiction.

4.2.0.5 Subgraphs

**Definition 4.2.5.** Given a graph $G = (V, E)$, a **subgraph** of $G$ is another graph $H = (V', E')$ where $V' \subseteq V$ and $E' \subseteq E$.

**Proposition 4.2.6.** If an undirected graph $G$ is bipartite, then any subgraph $H$ of $G$ is also bipartite.

**Proposition 4.2.7.** An undirected graph $G$ is not bipartite if $G$ has an odd cycle $C$ as a subgraph.

*Proof:* If $G$ is bipartite then, since $C$ is a subgraph, $C$ is also bipartite (by the above proposition). However, $C$ is not bipartite!

4.2.0.6 Bipartite Graph Characterization

**Theorem 4.2.8.** An undirected graph $G$ is bipartite $\iff$ it has no odd length cycle as a subgraph.
*Proof:* **Only If:** $G$ has an odd cycle implies $G$ is not bipartite. **If:** $G$ has no odd length cycle. Assume without loss of generality that $G$ is connected.
(A) Pick $u$ arbitrarily and run $\text{BFS}(u)$.
(B) Let $X = \bigcup_{i \text{ even}} L_i$ and $Y = \bigcup_{i \text{ odd}} L_i$.
(C) **Claim:** $X$ and $Y$ form a valid partition if $G$ has no odd length cycle.

4.2.0.7 Proof of Claim

Claim 4.2.9. In BFS($u$), if $a, b \in L_i$ and $ab \in E(G)$, then there is an odd length cycle containing $ab$.

Proof: Let $v$ be the least common ancestor of $a, b$ in the BFS tree $T$. Then $v$ is in some level $j < i$ (it could be $u$ itself). The path from $v$ to $a$ in $T$ has length $i - j$, and the path from $v$ to $b$ in $T$ has length $i - j$. These two paths plus the edge $(a, b)$ form an odd cycle of length $2(i - j) + 1$.

4.2.0.8 Proof of Claim: Figure

4.2.0.9 Another tidbit

Corollary 4.2.10. There is an $O(n+m)$ time algorithm to check if $G$ is bipartite and output an odd cycle if it is not.

4.3 Shortest Paths and Dijkstra's Algorithm

4.3.0.10 Shortest Path Problems

Input: an (undirected or directed) graph $G = (V, E)$ with edge lengths (or costs). For edge $e = (u \rightarrow v)$, $\ell(e) = \ell(u \rightarrow v)$ is its length.
(A) Given nodes $s, t$, find a shortest path from $s$ to $t$.
(B) Given a node $s$, find shortest paths from $s$ to all other nodes.
(C) Find shortest paths for all pairs of nodes.
Many applications!

4.3.1 Single-Source Shortest Paths:

4.3.1.1 Non-Negative Edge Lengths

Single-Source Shortest Path Problems
(A) Input: an (undirected or directed) graph $G = (V, E)$ with non-negative edge lengths. For edge $e = (u \rightarrow v)$, $\ell(e) = \ell(u \rightarrow v)$ is its length.
(B) Given nodes $s, t$, find a shortest path from $s$ to $t$.
(C) Given a node $s$, find shortest paths from $s$ to all other nodes.

(A) Restrict attention to directed graphs.
(B) The undirected graph problem can be reduced to the directed graph problem. How?
  (i) Given an undirected graph $G$, create a new directed graph $G'$ by replacing each edge $\{u, v\}$ in $G$ by $(u \rightarrow v)$ and $(v \rightarrow u)$ in $G'$.
  (ii) Set $\ell(u \rightarrow v) = \ell(v \rightarrow u) = \ell(\{u, v\})$.
  (iii) Exercise: show that the reduction works.

4.3.1.2 Single-Source Shortest Paths via BFS

(A) **Special case:** all edge lengths are 1.
  (i) Run **BFS**($s$) to get shortest path distances from $s$ to all other nodes.
  (ii) $O(m + n)$ time algorithm.
(B) **Special case:** suppose $\ell(e)$ is an integer for all $e$. Can we use **BFS**? Reduce to the unit edge-length problem by placing $\ell(e) - 1$ dummy nodes on $e$.
(C) Let $L = \max_e \ell(e)$. The new graph has $O(mL)$ edges and $O(mL + n)$ nodes. **BFS** takes $O(mL + n)$ time. Not efficient if $L$ is large.

4.3.1.3 Towards an algorithm

Why does **BFS** work? **BFS**($s$) explores nodes in increasing distance from $s$.

**Lemma 4.3.1.** Let $G$ be a directed graph with non-negative edge lengths. Let $\text{dist}(s, v)$ denote the shortest path length from $s$ to $v$. If $s = v_0 \to v_1 \to v_2 \to \ldots \to v_k$ is a shortest path from $s$ to $v_k$, then for $1 \leq i < k$:
(A) $s = v_0 \to v_1 \to v_2 \to \ldots \to v_i$ is a shortest path from $s$ to $v_i$;
(B) $\text{dist}(s, v_i) \leq \text{dist}(s, v_k)$.

**Proof:** Suppose not. Then for some $i < k$ there is a path $P'$ from $s$ to $v_i$ of length strictly less than that of $s = v_0 \to v_1 \to \ldots \to v_i$.
Then $P'$ concatenated with $v_i \to v_{i+1} \to \ldots \to v_k$ contains a strictly shorter path to $v_k$ than $s = v_0 \to v_1 \to \ldots \to v_k$. $\blacksquare$

4.3.1.4 A proof by picture

[Figure: a shortest path from $v_0$ to $v_6$, and a shorter path from $v_0$ to $v_6$: a contradiction.]

4.3.1.5 A Basic Strategy

Explore vertices in increasing order of distance from $s$ (for simplicity, assume that nodes are at different distances from $s$ and that no edge has zero length):

```
Initialize for each node v: dist(s, v) = infinity
Initialize S = emptyset
for i = 1 to |V| do
    (* Invariant: S contains the i-1 closest nodes to s *)
    Among nodes in V \ S, find the node v that is the i-th closest to s
    Update dist(s, v)
    S = S ∪ {v}
```

How can we implement the step in the for loop?

4.3.1.6 Finding the $i$th closest node

(A) $S$ contains the $i - 1$ closest nodes to $s$.
(B) We want to find the $i$th closest node in $V - S$.

What do we know about the $i$th closest node?

Claim 4.3.2. Let $P$ be a shortest path from $s$ to $v$, where $v$ is the $i$th closest node. Then all intermediate nodes in $P$ belong to $S$.

Proof: If $P$ had an intermediate node $u$ not in $S$, then $u$ would be closer to $s$ than $v$. This implies that $v$ is not the $i$th closest node to $s$; recall that $S$ already contains the $i - 1$ closest nodes.

4.3.2 Finding the $i$th closest node repeatedly

4.3.2.1 An example

[Figure: finding the $i$th closest node repeatedly on an example graph.]

4.3.2.2 Finding the $i$th closest node

Corollary 4.3.3. The $i$th closest node is adjacent to $S$.

4.3.2.3 Finding the $i$th closest node

(A) $S$ contains the $i - 1$ closest nodes to $s$.
(B) We want to find the $i$th closest node in $V - S$.
(C) For each $u \in V \setminus S$, let $P(s, u, S)$ be a shortest path from $s$ to $u$ using only nodes in $S$ as intermediate vertices.
(D) Let $d'(s, u)$ be the length of $P(s, u, S)$.
(E) Observations: for each $u \in V - S$,
  (i) $\text{dist}(s, u) \leq d'(s, u)$, since we are constraining the paths;
  (ii) $d'(s, u) = \min_{a \in S}(\text{dist}(s, a) + \ell(a, u))$. Why?
(F) **Lemma 4.3.4.** If $v$ is the $i$th closest node to $s$, then $d'(s, v) = \text{dist}(s, v)$.

4.3.2.4 Finding the $i$th closest node

**Lemma 4.3.5.** Given:
(A) $S$: the set of the $i - 1$ closest nodes to $s$, and
(B) $d'(s, u) = \min_{x \in S}(\text{dist}(s, x) + \ell(x, u))$,
if $v$ is an $i$th closest node to $s$, then $d'(s, v) = \text{dist}(s, v)$.

**Proof:** Let $v$ be the $i$th closest node to $s$. Then there is a shortest path $P$ from $s$ to $v$ that contains only nodes in $S$ as intermediate nodes (see the previous claim). Therefore $d'(s, v) = \text{dist}(s, v)$.

4.3.2.5 Finding the $i$th closest node

**Lemma 4.3.6.** If $v$ is an $i$th closest node to $s$, then $d'(s, v) = \text{dist}(s, v)$.

**Corollary 4.3.7.** The $i$th closest node to $s$ is the node $v \in V - S$ such that $d'(s, v) = \min_{u \in V - S} d'(s, u)$.

**Proof:** For every node $u \in V - S$, $\text{dist}(s, u) \leq d'(s, u)$, and for the $i$th closest node $v$, $\text{dist}(s, v) = d'(s, v)$. Moreover, $\text{dist}(s, u) \geq \text{dist}(s, v)$ for each $u \in V - S$.
4.3.2.6 Candidate algorithm for shortest path

```
Initialize for each node v: dist(s, v) = infinity
Initialize S = emptyset, d'(s, s) = 0
for i = 1 to |V| do
    (* Invariant: S contains the i-1 closest nodes to s *)
    (* Invariant: d'(s, u) is the shortest path distance from s to u using only S as intermediate nodes *)
    Let v be such that d'(s, v) = min_{u in V - S} d'(s, u)
    dist(s, v) = d'(s, v)
    S = S ∪ {v}
    for each node u in V \ S do
        d'(s, u) = min_{a in S} ( dist(s, a) + ℓ(a, u) )
```

Correctness: by induction on $i$, using the previous lemmas.

Running time: $O(n \cdot (n + m))$ time.
(A) $n$ outer iterations. In each iteration, $d'(s, u)$ is recomputed for each $u$ by scanning all edges out of nodes in $S$; $O(m + n)$ time per iteration.

4.3.2.7 Example

4.3.2.8 Improved Algorithm

(A) The main work is to compute the $d'(s, u)$ values in each iteration.
(B) $d'(s, u)$ changes from iteration $i$ to $i + 1$ only because of the node $v$ that is added to $S$ in iteration $i$.

```
Initialize for each node v: dist(s, v) = d'(s, v) = infinity
Initialize S = emptyset, d'(s, s) = 0
for i = 1 to |V| do
    // S contains the i-1 closest nodes to s,
    // and the values of d'(s, u) are current
    Let v be the node realizing d'(s, v) = min_{u in V - S} d'(s, u)
    dist(s, v) = d'(s, v)
    S = S ∪ {v}
    Update d'(s, u) for each u in V - S as follows:
        d'(s, u) = min( d'(s, u), dist(s, v) + ℓ(v, u) )
```

Running time: $O(m + n^2)$ time.
(A) $n$ outer iterations, each performing the following steps.
(B) Updating $d'(s, u)$ after $v$ is added takes $O(\deg(v))$ time, so the total work is $O(m)$, since a node enters $S$ only once.
(C) Finding $v$ from the $d'(s, u)$ values takes $O(n)$ time.

4.3.2.9 Dijkstra's Algorithm

(A) Eliminate $d'(s, u)$ and let $\text{dist}(s, u)$ maintain it.
(B) Update the $\text{dist}$ values after adding $v$ by scanning the edges out of $v$.

```
Initialize for each node v: dist(s, v) = infinity
Initialize S = emptyset, dist(s, s) = 0
for i = 1 to |V| do
    Let v be such that dist(s, v) = min_{u in V - S} dist(s, u)
    S = S ∪ {v}
    for each u in Adj(v) do
        dist(s, u) = min( dist(s, u), dist(s, v) + ℓ(v, u) )
```

Use priority queues to maintain the $\text{dist}$ values for a faster running time:
(A) Using heaps and standard priority queues: $O((m + n) \log n)$.
(B) Using Fibonacci heaps: $O(m + n \log n)$.

4.3.2.10 Example: Dijkstra algorithm in action

4.3.3 Priority Queues

4.3.3.1 Priority Queues

A data structure to store a set $S$ of $n$ elements, where each element $v \in S$ has an associated real/integer key $k(v)$, supporting the following operations:
(A) makePQ: create an empty queue.
(B) findMin: find the minimum key in $S$.
(C) extractMin: remove the $v \in S$ with the smallest key and return it.
(D) insert($v, k(v)$): add a new element $v$ with key $k(v)$ to $S$.
(E) delete($v$): remove element $v$ from $S$.
(F) decreaseKey($v, k'(v)$): decrease the key of $v$ from $k(v)$ (current key) to $k'(v)$ (new key). Assumption: $k'(v) \leq k(v)$.
(G) meld: merge two separate priority queues into one.

All operations can be performed in $O(\log n)$ time. decreaseKey is implemented via delete and insert.
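To make this interface concrete, here is a small Python sketch (our own, not from the notes) on top of the standard-library heapq module. Since heapq has no native decreaseKey, the sketch realizes the delete-and-insert idea lazily: it pushes a fresh entry and discards stale entries at extractMin. Only the operations Dijkstra's algorithm needs are shown.

```python
# Sketch (ours) of the priority-queue interface above, built on heapq.
import heapq

class PQ:
    def __init__(self):
        self.heap = []          # entries are (key, element)
        self.key = {}           # current key of each live element

    def insert(self, v, k):
        self.key[v] = k
        heapq.heappush(self.heap, (k, v))

    def decreaseKey(self, v, k):
        assert k <= self.key[v]             # new key must not increase
        self.key[v] = k
        heapq.heappush(self.heap, (k, v))   # old entry becomes stale

    def extractMin(self):
        while self.heap:
            k, v = heapq.heappop(self.heap)
            if self.key.get(v) == k:        # skip stale entries
                del self.key[v]
                return v, k
        raise IndexError("extractMin from empty priority queue")

# Tiny usage example:
pq = PQ()
pq.insert("a", 5); pq.insert("b", 3); pq.decreaseKey("a", 1)
print(pq.extractMin())   # ('a', 1)
print(pq.extractMin())   # ('b', 3)
```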
4.3.3.2 Dijkstra's Algorithm using Priority Queues

```
Q <- makePQ()
insert(Q, (s, 0))
for each node u != s do
    insert(Q, (u, infinity))
S <- emptyset
for i = 1 to |V| do
    (v, dist(s, v)) = extractMin(Q)
    S = S ∪ {v}
    for each u in Adj(v) do
        decreaseKey(Q, (u, min( dist(s, u), dist(s, v) + ℓ(v, u) )))
```

Priority queue operations:
(A) $O(n)$ insert operations
(B) $O(n)$ extractMin operations
(C) $O(m)$ decreaseKey operations

4.3.3.3 Implementing Priority Queues via Heaps

Using Heaps
Store elements in a heap based on the key value.
(A) All operations can be done in $O(\log n)$ time.
Dijkstra's algorithm can be implemented in $O((n + m) \log n)$ time.

4.3.3.4 Priority Queues: Fibonacci Heaps/Relaxed Heaps

Fibonacci Heaps
(A) extractMin, delete in $O(\log n)$ time.
(B) insert in $O(1)$ amortized time.
(C) decreaseKey in $O(1)$ amortized time: $\ell$ decreaseKey operations, for $\ell \geq n$, take $O(\ell)$ time in total.
(D) Relaxed Heaps: decreaseKey in $O(1)$ worst-case time, but at the expense of meld (not necessary for Dijkstra's algorithm).

(A) Dijkstra's algorithm can be implemented in $O(n \log n + m)$ time. If $m = \Omega(n \log n)$, the running time is linear in the input size.
(B) These data structures are complicated to analyze and implement. Recent work has obtained data structures that are easier to analyze and implement, and that perform well in practice: Rank-Pairing Heaps (European Symposium on Algorithms, September 2009!).

4.3.3.5 Shortest Path Tree

Dijkstra's algorithm finds the shortest path distances from $s$ to $V$. Question: how do we find the paths themselves?

4.3.3.6 Shortest Path Tree

**Lemma 4.3.8.** The edge set $\{(u, \text{prev}(u)) \mid u \in V\}$ is the reverse of a shortest path tree rooted at $s$, where $\text{prev}(u)$ is the neighbor from which $\text{dist}(s, u)$ was last updated. For each $u$, the reverse of the path from $u$ to $s$ in the tree is a shortest path from $s$ to $u$.

**Proof:** [Proof Sketch.]
(A) The edge set $\{(u, \text{prev}(u)) \mid u \in V\}$ induces a directed in-tree rooted at $s$ (why?).
(B) Use induction on $|S|$ to argue that the tree is a shortest path tree for the nodes in $V$.

4.3.3.7 Shortest paths to $s$

Dijkstra's algorithm gives shortest paths from $s$ to all nodes in $V$. How do we find shortest paths from all of $V$ to $s$?
(A) In undirected graphs, a shortest path from $s$ to $u$ is also a shortest path from $u$ to $s$, so there is no need to distinguish.
(B) In directed graphs, run Dijkstra's algorithm in $G^{\text{rev}}$, the graph obtained by reversing all edges.
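Putting the pieces of this section together, here is a short runnable sketch (our own, not from the notes) of Dijkstra's algorithm over a weighted adjacency-list graph. It uses heapq directly, with the lazy-deletion trick in place of decreaseKey, and records prev($u$) so the shortest-path tree of Lemma 4.3.8 can be read off. The graph representation and all names are assumptions made for the example.

```python
# Dijkstra over an adjacency-list graph adj[u] = [(v, length), ...] (ours).
import heapq

def dijkstra(adj, s):
    """Return (dist, prev) for non-negative edge lengths, O((n+m) log n)."""
    dist = {s: 0}
    prev = {}
    heap = [(0, s)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:              # stale entry, u already finalized
            continue
        done.add(u)
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return dist, prev

# Example: reading a shortest path back through prev (the tree of Lemma 4.3.8).
adj = {"s": [("a", 1), ("b", 4)], "a": [("b", 2), ("c", 6)], "b": [("c", 3)], "c": []}
dist, prev = dijkstra(adj, "s")
print(dist)                        # {'s': 0, 'a': 1, 'b': 3, 'c': 6}
path, u = [], "c"
while u != "s":
    path.append(u); u = prev[u]
print(["s"] + path[::-1])          # ['s', 'a', 'b', 'c']
```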
{"Source-Url": "https://courses.engr.illinois.edu/cs473/sp2015/w/lec/04_notes.pdf", "len_cl100k_base": 6971, "olmocr-version": "0.1.50", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 60083, "total-output-tokens": 8180, "length": "2e12", "weborganizer": {"__label__adult": 0.0006213188171386719, "__label__art_design": 0.00046372413635253906, "__label__crime_law": 0.0007777214050292969, "__label__education_jobs": 0.0025539398193359375, "__label__entertainment": 0.00021457672119140625, "__label__fashion_beauty": 0.00032591819763183594, "__label__finance_business": 0.0003387928009033203, "__label__food_dining": 0.0007581710815429688, "__label__games": 0.0027923583984375, "__label__hardware": 0.0018644332885742188, "__label__health": 0.0019330978393554688, "__label__history": 0.00078582763671875, "__label__home_hobbies": 0.00026035308837890625, "__label__industrial": 0.0008263587951660156, "__label__literature": 0.0006895065307617188, "__label__politics": 0.0004527568817138672, "__label__religion": 0.0009312629699707032, "__label__science_tech": 0.26611328125, "__label__social_life": 0.0001982450485229492, "__label__software": 0.009674072265625, "__label__software_dev": 0.70361328125, "__label__sports_fitness": 0.0011072158813476562, "__label__transportation": 0.0020008087158203125, "__label__travel": 0.0005402565002441406}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 19902, 0.03093]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 19902, 0.56183]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 19902, 0.76914]], "google_gemma-3-12b-it_contains_pii": [[0, 845, false], [845, 1596, null], [1596, 3217, null], [3217, 4453, null], [4453, 5261, null], [5261, 7211, null], [7211, 9144, null], [9144, 10578, null], [10578, 10885, null], [10885, 12097, null], [12097, 13703, null], [13703, 15499, null], [15499, 17244, null], [17244, 19042, null], [19042, 19902, null]], "google_gemma-3-12b-it_is_public_document": [[0, 845, true], [845, 1596, null], [1596, 3217, null], [3217, 4453, null], [4453, 5261, null], [5261, 7211, null], [7211, 9144, null], [9144, 10578, null], [10578, 10885, null], [10885, 12097, null], [12097, 13703, null], [13703, 15499, null], [15499, 17244, null], [17244, 19042, null], [19042, 19902, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 19902, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 19902, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 19902, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 19902, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 19902, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 19902, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 19902, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 19902, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 19902, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 19902, null]], "pdf_page_numbers": [[0, 845, 1], [845, 1596, 2], [1596, 3217, 3], [3217, 4453, 4], [4453, 5261, 5], [5261, 7211, 6], [7211, 9144, 7], [9144, 10578, 8], [10578, 10885, 9], [10885, 12097, 10], [12097, 13703, 11], [13703, 15499, 12], [15499, 17244, 13], [17244, 
19042, 14], [19042, 19902, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 19902, 0.01231]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
4b4d61e5fb4634ce1651d53863dc96379a69e82e
INTRODUCTION

Software testing is an important means of assessing software quality. Program testing is a rapidly maturing area within software engineering that is receiving increasing attention from both computer science theoreticians and practitioners. Its general aim is to affirm the quality of software systems by systematically exercising the software in carefully controlled circumstances. Testing often consumes 40%-50% of development effort, and it consumes more effort for systems that require higher levels of reliability. Testing is a significant portion of the software engineering process. With the development of fourth-generation languages (4GL), which speed up the implementation process, the proportion of time devoted to testing is decreasing. As the amount of maintenance and upgrade work for existing systems grows, a significant amount of testing will also be needed to verify systems after changes are made.

Class diagrams are widely used to describe the types of objects in a system and their relationships. Class diagrams model class structure and contents using design elements such as classes, packages, and objects. Class diagrams describe three different perspectives when designing a system: conceptual, specification, and implementation. These perspectives become evident as the diagram is created and help solidify the design.

The remainder of the paper is organized as follows: Section 2 introduces object-oriented software engineering and presents the various stages of testing. Section 3 discusses the literature survey. Section 4 presents various artifacts related to class testing. Section 5 presents the objectives of this research. Section 6 presents the preliminary class architecture, and Section 7 presents the conclusion and future work.

Object Oriented Testing

An object is an entity composed of data and procedures. The procedures, referred to as methods, implement the operations on the object's data. Each object has a state, an identity, and a behavior. The definition of an object's type is a description of its capabilities. Object-oriented testing focuses on the states of objects and their interactions.

In an object-oriented system, classes play important roles. Classes are the smallest testable units, and they provide an excellent structuring mechanism. First, they allow a system to be divided into well-defined units that can then be implemented separately. Second, classes support information hiding: a class can export a purely procedural interface while the internal structure of its data is hidden, which allows the structure to be changed without affecting users of the class and thus simplifies maintenance. Third, object-orientation encourages and supports software reuse, which may be achieved either through the simple reuse of a class in a library or via inheritance, whereby a new class is created as an extension of an existing one. Because the behavior of inherited methods can change in a subclass, methods that are called within other methods must be tested in the context of each class.

Unlike conventional test case design, which is driven by an input-process-output view of software or by the algorithmic detail of individual modules, object-oriented testing focuses on designing appropriate sequences of operations to exercise the states of a class. Object-oriented software is developed incrementally, with iterative and recursive cycles of planning, analysis, design, implementation, and testing. Testing plays a special role here, because it is performed after each increment.
The Major Stages of Research and Development Trends in Object Oriented System Architecture and Testing (Literature survey)

Generally, we see three major stages in the research and development of testing techniques, each with a different trend. By trend, we mean how the mainstream of research and development activities finds the problems to solve and how it solves them. As described below, the evolution of the field is reflected in the evolution of its testing techniques: the technique used for selecting test data has progressed from an ad hoc approach, through an implementation-based phase, and is now specification based. The literature survey includes the solution approaches of various research studies that dealt with problems related to testing methods and with issues in the design of testing tools for various circumstances.

Literature Survey

[BBL97] A framework for probabilistic functional testing is proposed in this paper. The authors introduce a formulation of the testing activity that guarantees a certain level of confidence in the correctness of the system under test. They also explain how one can generate appropriate distributions for data domains, including the most common domains, such as intervals of integers, unions, Cartesian products, and inductively defined sets. A tool assisting test-case generation according to this theory is proposed. The method is illustrated on a small formal specification.

[Beizer90] This book gives a fairly comprehensive overview of software testing that emphasizes formal models for testing. The author provides a general overview of the testing process and the reasons and goals for testing. In the second chapter of the book, the author classifies the different types of bugs that could arise in program development. The notions of path testing, transaction flow graphs, data-flow testing, domain testing, and logic-based testing are introduced in detail. The author also introduces several attempts to quantify program complexity and a more abstract discussion involving paths, regular expressions and syntax testing. The implementation of software testing based on these strategies is also discussed.

[BG01] Testing becomes complicated with features, such as the absence of component source code, that are specific to component-based software. This paper proposes a technique combining both black-box and white-box strategies. A graphical representation of component software, called a component-based software flow graph (CBSFG), which visualizes information gathered from both specification and implementation, is described. It can then be used for test-case identification based on well-known structural techniques.

[BIMR97] In this paper the authors use formal architectural descriptions (CHAM) to model the behaviors of interest of the systems. A graph of all the possible behaviors of the system, in terms of the interactions between its components, is derived and further reduced. A suitable set of reduced graphs highlights the specific architectural properties of the system and can be used for the generation of integration tests according to a coverage strategy, analogous to the control and data flow graphs in structural testing.

[GG75] This is the first published paper that attempted to provide a theoretical foundation for testing. The "fundamental theorem of testing" proposed by the authors characterizes the properties of a completely effective test selection strategy.
The authors argue that a test selection strategy is completely effective if it is guaranteed to discover any error in a program. As an example, the effectiveness of branch and path testing in discovering errors is compared. The use of a decision table (a mixture of requirements-based and design-based functional testing) as an alternative method is also proposed.

[GH88] In this article, the evolution of software test engineering is traced by examining changes in the testing process model and the level of professionalism over the years. Two phase models, the demonstration and destruction models, and two life cycle models, the evolution and prevention models, are provided to characterize the growth of software testing over time. Based on the models, a prevention-oriented testing technology is introduced and analyzed in detail.

[Howden76] The reliability of path testing provides an upper bound for the reliability of testing only a subset of a program's paths, which is what always happens in reality. This paper begins by showing the impossibility of constructing a test strategy that is guaranteed to discover all errors in a program. Three commonly occurring classes of errors (computation, domain, and subcase errors) are characterized. The reliability properties associated with these errors affect how path testing is defined.

[Howden80] The usual practice of functional testing is to identify the functions implemented by a system or program from its requirement specifications. In this paper, the necessity of testing design functions in addition to requirement functions is discussed. The paper indicates how systematic design methods, such as structured design and the Jackson design method, can be used to construct functional tests. Structured design can be used to identify the design functions that must be tested in the code, while the Jackson method can be used to identify the types of data that should be used to construct tests for those functions.

[Huang75] This paper introduces the basic notions of dynamic testing based on a detailed path analysis in which full knowledge of the contents of the source program being tested is used during the testing process. Instead of the common test criterion in which every statement in the program is executed at least once, the author suggests, and demonstrates with an example, that a better criterion is to require that every edge in the program diagram be exercised at least once. The process of instrumenting a program by inserting probes along each segment of the program is suggested in this paper.

[JM94] Many models exist for estimating and predicting the reliability of software systems, most of which consider a software system as a black box and predict its reliability based on the failure data observed during testing. In this paper, a reliability model based on the software structure is proposed. The model uses the number of times a particular module is executed as its main input. A software system is modeled as a graph, and the reliability of a node is assumed to be a function of the number of times it gets executed during testing: the larger the number of times a node gets executed, the higher its reliability. The reliability of the software system is then computed through simulation by using the reliabilities of the individual nodes.

[Miller81] This article serves as one of the introductory sections of the book Tutorial: Software Testing and Validation Techniques.
A cross section of program testing technology before and around the year 1980 is provided in this book, including the theoretical foundations of testing, tools and techniques for static analysis and dynamic analysis, effectiveness assessment, management and planning, and the research and development of software testing and validation. The article briefly summarizes each of the major sections and provides a general overview of the motivating forces, the philosophy and principles of testing, and the relationship between testing and software engineering.

[ROT89] This paper proposes one of the earliest approaches focusing on utilizing specifications in selecting test cases. In traditional specification-based functional testing, test cases are selected by hand based on a requirement specification, which means functional testing merely includes heuristic criteria. Structural testing has an advantage in that its application can be automated and its satisfaction determined. The authors propose approaches to specification-based testing by extending a wide variety of implementation-based testing techniques to formal specification languages, and they demonstrate these approaches for the Anna and Larch specification languages.

[RR85] In this paper, a variety of software technologies are reviewed. The technology maturation process by which a piece of technology is created is described: first, an idea is formulated and preliminarily used; it is then developed and extended into a broader solution and finally enhanced to product-quality applications and marketed to the public. The time required for a piece of technology to mature is studied, and the actions that can accelerate the maturation process are addressed. This paper serves as a strong framework for technology maturation study.

[RW85] A family of test data selection criteria based on data flow analysis is defined in this paper. The authors contend that data flow criteria are superior to current path selection criteria because, when using the latter strategy, program errors can go undetected. The definition/use graph is introduced and compared with a program graph based on the same program. The interrelationships between these data flow criteria are also discussed.

[Shaw90] Software engineering is still in the process of becoming a true engineering discipline. This article studies the model for the evolution of an engineering discipline and applies it to software technology. Five basic steps are suggested for the software profession in creating a true engineering discipline: understanding the nature of expertise, recognizing different ways to obtain information, encouraging routine practice, expecting professional specializations, and improving the coupling between science and commercial practice. The significant shifts in software engineering research since the 1960s are also discussed in this article.

[WC80] Domain errors affect a subset of the program's input domain and can be caused by incorrect predicates in branching statements or by incorrect computations that affect variables in branching statements. In this paper, a set of constraints under which it is possible to reliably detect domain errors is introduced. The paper develops the idea of linearly bounded domains. The practical limitations of the approach are also discussed, the most severe of which is the need to generate and then evaluate test points for all boundary segments of all domains of all program paths.
[Whit00] As a practical tutorial article, this paper answers questions from developers about how bugs escape testing. Undetected bugs come from executing untested code, differences in the order of execution, combinations of untested input values, and untested operating environments. A four-phase approach is described to answer these questions. By carefully modeling the software's environment, selecting test scenarios, running and evaluating test scenarios, and measuring testing progress, the author offers testers a structure for the problems they want to solve during each phase.

[Poston 2005] Here we summarize their work:
- Integration of all the data across tools and repositories.
- Integration of control across the tools.
- Integration to provide a single graphical interface for the test tool set.

**Limitation:** It emphasizes only integration tools (usability and portability).

[Rosenberg 2008] The approach to software metrics for object-oriented programs must be different from the standard metric sets. Some metrics, such as lines of code and cyclomatic complexity, have become accepted as standard for traditional functional/procedural programs; for object-oriented scenarios, however, there are many proposed object-oriented metrics in the literature.

**Limitation:** This provides only a conceptual framework for measurement.

[Agrawal 2007] According to this paper, the importance of software measurement is increasing, which is leading to the development of new measurement techniques.

**Limitation:**
a) In this research, object-oriented metrics do not provide any relationship between requirements and testing attributes.
b) In this research, object-oriented metrics cannot be evaluated for large data sets.

"Software quality is another focus of our research. Metrics fall into two categories: the productivity and the quality. Most of our object oriented metrics are quality related. We wish to achieve good maintainability, reusability, flexibility and portability in the architecture of the software testing tool under construction".

[Anderson 2005] They emphasize that the software industry has performed a significant amount of research on improving software quality using software tools and metrics that improve quality and reduce overall development time. Good-quality code will also be easier to write, understand, maintain and upgrade [1].

**Limitation:**
a) In this research, object-oriented metrics do not provide any relationships between requirements and testing attributes.
b) In this research, object-oriented metrics do not provide full-featured testing tools (only complexity and cohesion measures).
c) In this research, object-oriented metrics provide only a conceptual framework for measurement.

[Briand 1999] This paper shows that the relationships between most of the existing coupling and cohesion measures for object-oriented (OO) systems and the fault proneness of object-oriented system classes can be studied empirically [6].

**Limitation:** It emphasizes only cohesion and coupling metrics.

[Bitman 1997] This research defines a key problem in software development, namely the changing complexity of software development, and a method to reduce that complexity.

**Encapsulation**
Wrapping data and functions into a single unit is known as encapsulation. This restricts the visibility of object states and restricts the observability of intermediate test results. Fault discovery is more difficult in this case.

**Inheritance**
The mechanism of deriving a new class from an old one is called inheritance.
The old class is referred to as the base class, and the new one is called the derived class or subclass. Inheritance results in invisible dependencies between super- and subclasses. Inheritance reduces code redundancy, which in turn increases code dependencies. If a function is erroneous in the base class, the error will also be inherited by the derived class. A subclass cannot be tested without its superclasses, and abstract classes cannot be tested at all.

**Polymorphism**
Polymorphism is one of the crucial features of OOP. It simply means that one name represents multiple forms. Because of polymorphism, all possible bindings must be tested. All potential execution paths and potential errors must be tested.

Testing begins by evaluating the OOA and OOD models. Object-oriented analysis models can be tested using the collected requirements and use cases. Object-oriented design can be tested by using the class and sequence diagrams. Structured walkthroughs and reviews should be conducted to ensure correctness, completeness and consistency. Object-oriented programming is centered on concepts such as Object, Class, Message, Interfaces, Inheritance, and Polymorphism. Traditional testing techniques can be adopted in object-oriented environments by using the following techniques:
- Function-based testing.
- Class testing.
- Integration testing.
- Fault-based testing.
- Scenario-based testing.

**Function-based Testing**
Like conventional (traditional) testing, function-based testing is based on product requirements and specifications.

**Class Testing**
Class testing is performed on the encapsulated class, the smallest testable unit. As part of a class hierarchy, each operation must be tested because its class hierarchy defines its context of use. New methods, inherited methods and redefined methods within the class must be tested. This testing is performed using the following approaches (see the sketch at the end of this subsection):
- Test each method (and constructor) within a class.
- Test the state behavior (attributes) of the class between methods.

Class testing differs from conventional testing in that conventional testing focuses on input-process-output, whereas class testing focuses on each method. In addition to testing methods within a class (either white box or black box), test cases should be designed so that they are explicitly associated with the class and/or method to be tested. The purpose of the test should be clearly stated. Each test case should contain the following:
1. A list of messages and operations that will be exercised as a consequence of the test.
2. A list of exceptions that may occur as the object is tested.
3. A list of external conditions for setup (i.e., changes in the environment external to the software that must exist in order to properly conduct the test).
4. Supplementary information that will aid in understanding or implementing the test.

**Some challenges in object-oriented class testing**

**Encapsulation**
It is difficult to obtain a snapshot of a class without building extra methods that display the class's state.

**Inheritance and polymorphism**
- Each new context of use (subclass) requires re-testing because a method may be implemented differently (polymorphism).
- Other unaltered methods within the subclass may use the redefined method and need to be tested.

**White box tests**
Basis path, condition, data flow and loop tests can all apply to individual methods but do not test interactions between methods.
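As a small illustration of the two class-testing approaches listed above (testing each method, and testing the state behavior between methods), the following sketch uses Python's unittest against a hypothetical Account class. The class, the tests, and all names are our own and are not part of the paper.

```python
# Hypothetical class under test and a minimal class-level test (ours).
import unittest

class Account:
    """Toy class whose state (balance) is changed by its methods."""
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balance += amount
    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class AccountClassTest(unittest.TestCase):
    def test_each_method(self):              # exercise methods one by one
        a = Account()
        a.deposit(10)
        self.assertEqual(a.balance, 10)
        self.assertRaises(ValueError, a.deposit, -5)
    def test_state_between_methods(self):    # state behavior across a sequence
        a = Account()
        a.deposit(10); a.withdraw(4); a.deposit(1)
        self.assertEqual(a.balance, 7)

if __name__ == "__main__":
    unittest.main()
```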
**Class-level testing can be classified into the following parts**

**Random class testing**
Identify the methods applicable to a class and define constraints on their use (a sketch of this idea appears at the end of this section):
- The class must always be initialized first.
- Identify a minimum test sequence, i.e., an operation sequence that defines the minimum life history of the class.
- Generate a variety of random (but valid) test sequences; this exercises more complex class instance life histories.

**Partition-based testing**
This approach reduces the number of test cases required to test a class in much the same way as equivalence partitioning for conventional software. The following types of partition-based testing are used:

**State-based partitioning**
Tests are designed so that operations that cause state changes are tested separately from those that do not cause any change in state.

**Attribute-based partitioning**
For each class attribute, operations are classified according to whether they use the attribute, modify the attribute, or neither use nor modify the attribute.

**Category-based partitioning**
Operations are categorized according to the function they perform:
i. Initialization
ii. Computation
iii. Query
iv. Termination

**Integration Testing**
OO software does not have a hierarchical control structure, and thus conventional top-down and bottom-up integration tests have little meaning. Integration testing can be applied using three different incremental strategies:
- Thread-based testing, which integrates the classes required to respond to one input or event.
- Use-based testing, which integrates the classes required by one use case.
- Cluster testing, which integrates the classes required to demonstrate one collaboration.

Test cases should be designed so that they are explicitly associated with the class and/or method to be tested. The purpose of the test should be clearly stated. Each test case should contain the following:
- A list of messages and operations that will be exercised as a consequence of the test.
- A list of exceptions that may occur as the object is tested.
- A list of external conditions for setup (i.e., changes in the environment external to the software that must exist in order to properly conduct the test).
- Supplementary information that will aid in understanding or implementing the test.

**Fault-based Testing**
Any product must conform to customer requirements. Hence, testing should begin with the analysis model itself to uncover errors. Fault-based testing is the method used to design tests that have a high probability of finding probable errors in the software [24]. Fault-based testing should begin with the analysis and design models. This type of testing can be based on specifications (e.g. user's manuals) or on the code, and it works best when based on both.

**Scenario-based Testing**
Scenario-based testing concentrates on what the customer does, not what the product does. It means capturing the tasks (use cases, if you will) that the customer has to perform and then using them and their variants as tests. Of course, this design work is best performed before the product is implemented. It is really an offshoot of a careful attempt at "requirements elicitation". These scenarios also tend to flush out interaction bugs. They are more complex and more realistic than fault-based tests. They tend to exercise multiple subsystems in a single test, precisely because that is what users do. The tests will not find everything, but they will at least cover the higher-visibility interaction bugs.
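Returning to the random class-testing idea described at the start of this section, the following sketch (our own illustration, not a tool from the paper) generates random but valid operation sequences against a small hypothetical Account class and checks a state invariant after each sequence:

```python
# Random class testing sketch (ours): random but valid method sequences.
import random

class Account:
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount
    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

random.seed(1)
for trial in range(100):                     # 100 random instance life histories
    a = Account()                            # the class is always initialized first
    for _ in range(random.randrange(1, 20)):
        if random.random() < 0.5:
            a.deposit(random.randrange(1, 100))
        elif a.balance > 0:                  # keep the sequence valid
            a.withdraw(random.randrange(1, a.balance + 1))
    assert a.balance >= 0                    # state invariant checked per sequence
print("100 random sequences passed")
```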
Objective of Research

This research work consists of the following:
- Designing an object-oriented testing architecture template at the class diagram level.
- Using this architecture to represent the different operations of each testing technique and to associate different attributes with them, relating the operations of one testing technique to other testing operations (from the set of operations it is capable of performing, an object changes its attribute values, which may cause changes to the attribute values of other objects).

Preliminary Class Architecture

The outcome of the present work is shown in Figure 1, and the necessary discussion of the testing concepts involved is given here. In Figure 1, object-oriented testing is divided into three parts based on functionality.

The first category consists of functional testing, class testing and its derived classes. This category is directly based on the requirements and specifications of software products, which involves the following:
1. Input the functional specification for function-level testing of any testing tool.
2. From the functional specification, construct class-level testing accordingly.
3. Class-level testing is divided into two parts: partition-based class testing and random testing. Partition-based testing and random testing are derived from class-level testing and use some properties of class testing.

In the second category, integration-based testing is further divided into three parts: thread-based, cluster and use-based testing:
1. Thread-based testing integrates the set of classes required to respond to one input or event for the system. Each thread is integrated and tested individually.
2. Use-based testing begins the construction of the system by testing those classes (called independent classes) that use very few (if any) server classes. After the independent classes are tested, the next layer of classes, called dependent classes, that use the independent classes is tested. This sequence of testing layers of dependent classes continues until the entire system is constructed.
3. Cluster testing is one step in the integration testing of OO software. Here, a cluster of collaborating classes (determined by examining the CRC and object-relationship models) is exercised by designing test cases that attempt to uncover errors in the collaborations.

The third part consists of fault-based testing and scenario-based testing. The objective of fault-based testing within an OO system is to design tests that have a high likelihood of uncovering plausible faults. Because the product or system must conform to customer requirements, the preliminary planning required to perform fault-based testing begins with the analysis model. The tester looks for plausible faults (i.e., aspects of the implementation of the system that may result in defects). To determine whether these faults exist, test cases are designed to exercise the design or code. Fault-based testing misses two main types of errors: (1) incorrect specifications and (2) interactions among subsystems. When errors associated with incorrect specifications occur, the product does not do what the customer wants. Scenario-based testing concentrates on what the user does, not what the product does. This means capturing the tasks (via use cases) that the user has to perform and then applying them and their variants as tests. Scenarios uncover interaction errors. However, to accomplish this, test cases must be more complex and more realistic than fault-based tests.
Scenario-based testing tends to exercise multiple subsystems in a single test.

CONCLUSION

The maturation of testing techniques has been fruitful but not adequate. Pressure to produce higher-quality software at lower cost is increasing, and the existing techniques used in practice are not sufficient for this purpose. Empirical studies and fundamental research that address the challenging problems, together with the development of methods and tools, should be conducted so that we can significantly improve the way we test software. The successful use of these techniques in industrial software development will validate the results of the research and drive future research. The pervasive use of software and the increased cost of validating it will motivate the creation of partnerships between industry and researchers to develop new techniques and facilitate their transfer to practice. The development of efficient testing techniques and tools that assist in the creation of high-quality software will become one of the most important research areas in the near future.

This research work first establishes a complete set of requirement specifications for a comprehensive software-testing tool. In an object-oriented environment, these requirements will address various testing methods and strategies for object-oriented development scenarios. This work will propose architectural designs based on object-oriented paradigms that satisfy the established requirement specifications. These designs can be further translated into practical industrial tools.

Future Work

In addition, this study will propose a class diagram whose use will be relevant for obtaining measurements of the proposed architectures. These measurements will be used to draw inferences for understanding the behavior of the metrics in relation to the proposed architectures, with the aim of improving the designs by optimizing their quality.

REFERENCES

Proceedings of the IEEE ICECCS-97, pp. 77-84.
{"Source-Url": "http://www.computerscijournal.org/download/4536", "len_cl100k_base": 5447, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 31462, "total-output-tokens": 7487, "length": "2e12", "weborganizer": {"__label__adult": 0.00035691261291503906, "__label__art_design": 0.00028777122497558594, "__label__crime_law": 0.00029969215393066406, "__label__education_jobs": 0.0007557868957519531, "__label__entertainment": 4.351139068603515e-05, "__label__fashion_beauty": 0.00013494491577148438, "__label__finance_business": 0.00010859966278076172, "__label__food_dining": 0.0003082752227783203, "__label__games": 0.000682830810546875, "__label__hardware": 0.0005211830139160156, "__label__health": 0.00029850006103515625, "__label__history": 0.00014913082122802734, "__label__home_hobbies": 5.555152893066406e-05, "__label__industrial": 0.00021791458129882812, "__label__literature": 0.0002465248107910156, "__label__politics": 0.0001779794692993164, "__label__religion": 0.0004012584686279297, "__label__science_tech": 0.0027904510498046875, "__label__social_life": 6.306171417236328e-05, "__label__software": 0.003925323486328125, "__label__software_dev": 0.9873046875, "__label__sports_fitness": 0.0002682209014892578, "__label__transportation": 0.00028324127197265625, "__label__travel": 0.0001621246337890625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34604, 0.01743]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34604, 0.76997]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34604, 0.91207]], "google_gemma-3-12b-it_contains_pii": [[0, 734, false], [734, 4957, null], [4957, 8658, null], [8658, 12446, null], [12446, 16098, null], [16098, 18232, null], [18232, 21703, null], [21703, 22692, null], [22692, 26577, null], [26577, 30260, null], [30260, 33849, null], [33849, 34604, null]], "google_gemma-3-12b-it_is_public_document": [[0, 734, true], [734, 4957, null], [4957, 8658, null], [8658, 12446, null], [12446, 16098, null], [16098, 18232, null], [18232, 21703, null], [21703, 22692, null], [22692, 26577, null], [26577, 30260, null], [30260, 33849, null], [33849, 34604, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34604, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34604, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34604, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34604, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34604, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34604, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34604, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34604, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34604, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34604, null]], "pdf_page_numbers": [[0, 734, 1], [734, 4957, 2], [4957, 8658, 3], [8658, 12446, 4], [12446, 16098, 5], [16098, 18232, 6], [18232, 21703, 7], [21703, 22692, 8], [22692, 26577, 9], [26577, 30260, 10], [30260, 33849, 11], [33849, 34604, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34604, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
6327b9e9f13f45de05da1447a01723963d25ccaf
Reducing Risks of Widespread Faults and Attacks for Commercial Software Applications: Towards Diversity of Software Components

Marco Casassa Mont (marco_casassa-mont@hp.com) Adrian Baldwin (adrian_baldwin@hp.com) Yolanta Beres (volanta_beres@hp.com) Keith Harrison (keith_harrison@hp.com) Martin Sadler (martin_sadler@hp.com) Simon Shiu (simon_shiu@hp.com)

Trusted E-Services Laboratory Hewlett-Packard Laboratories – Bristol, UK

Abstract

Recent IT attacks demonstrated how vulnerable consumers and enterprises are when adopting commercial and widely deployed operating systems, software applications and solutions. Diversity in software applications is fundamental to increase the chances of survivability to faults and attacks. Current approaches to diversity are mainly based on the development of multiple versions of the same software, their parallel execution and the usage of voting mechanisms. Because of the high cost, they are used mainly for very critical and special cases. We introduce and discuss an alternative method to ensure diversity for common, widespread software applications without requiring additional computational resources. This method takes advantage of the componentisation of modern software solutions and enforces diversity at installation time, through the random selection and deployment of critical software components. Randomisation criteria are adaptable to feedback gathered from software installations and affect the software components' lifecycle. We describe a few encouraging results obtained from simulations.

Keywords: components, applications, diversity, deployment, faults, attacks, survivability

1. Introduction

In the last two decades, commercial software has gone through a process of consolidation and homogenisation. The current commercial computing environment, both within the enterprise and the home, is largely dominated by a few software systems, at the operating system (OS) level (e.g. Microsoft Windows, Linux, Unix, etc.), software development level (software frameworks such as Java and Microsoft .NET), application level (application suites such as Microsoft Office, etc.) and Internet access level (such as browsers and web servers like those provided by Netscape, Microsoft, etc.). On one hand this process has lowered the costs of products because of the economy of scale and provided common platforms to simplify interactions. On the other hand there has been an increasing number of widespread attacks exploiting vulnerabilities of massively deployed software. Recently, the code red [22], code blue and Nimda [23] worms caused huge problems to corporations and individuals by exploiting simple software vulnerabilities like the buffer overrun bug [8]. Large populations of users, employees and businesses have been affected, causing economic and social problems. Software bugs and vulnerabilities have such a dramatic impact because the large number of identical installations makes it easy to exploit these faults as attacks; hence the absence of diversity increases the exposure of most systems on the Internet. Unfortunately, software bugs are inevitable in most, if not all, software systems, especially with the current levels of complexity. The adverse effects of such bugs vary in severity, but all are generally capable of causing faults and malfunctions, and some can leave the software system vulnerable to external attacks. In view of the fact that every user's installation of a specific software product is identical, each installation will include the same bugs, and therefore the same vulnerabilities.
As a result, large-scale attacks on software systems are successful because computer hackers are likely to make the (correct) assumption that most, if not all, of the targeted operating systems or software applications are built in exactly the same way and, as such, have the same bugs and problems. Attacks can be tailored to each system, but recent viruses such as code red have caused untargeted systems, such as Internet-enabled printers, to crash even though they are not the intended target. Similarly, a major fault or malfunction caused by a bug in the software system will affect all users in the same way. Concern has been expressed in the agricultural industries as the genetic diversity of crops is reduced to allow particular pesticides to be used. This can have the effect of reducing the resistance to particular diseases or, where a disease strikes, it can wipe out an entire crop. Analogies can be drawn to the ecosystem of computers on the Internet, where viruses evolve much quicker than systems change, yet the large number of identical systems enables a virulent virus to spread very quickly, thus causing significant damage. Therefore it is believed that diversity is fundamental to prevent faults and attacks. Critical and special-purpose software and applications (like the software systems controlling nuclear power stations, aircraft and spacecraft, bank exchanges, etc.) are designed, implemented and deployed by keeping in mind the importance of ensuring operational survivability and reliability. These requirements are usually met by adopting very expensive solutions based on replication and independent software and systems. Unfortunately, the approaches used for critical software are not suitable for common and widespread operating systems, software and applications, mainly because of the involved costs, the implications in terms of economy of scale, the need for additional computational resources and the peculiarity of the targeted market. Despite this, we believe that diversity can also be achieved for common and popular software applications while respecting their cost effectiveness and the constraints on required computational resources. In this paper we briefly describe some current techniques and mechanisms used to ensure diversity in software applications. We then introduce and discuss an alternative approach to software diversity aiming at the reduction of widespread software attacks and faults. This approach takes advantage of the componentisation of modern software solutions and enforces diversity at installation time by randomly selecting and deploying critical software components.

2. Software Diversity: background and requirements

The problem of dealing with faults and attacks for information, software and systems has been widely analysed and researched in the past. Software diversity is a key element to achieve protection [1] against both natural phenomena (including random failures, physical damage and corrupted information) and human actions (including design faults, interaction faults, malicious logic, intrusions and physical attacks).

2.1 Related Work

N-version software diversity has been analysed and proposed [2], [18], [19] as a means of dealing with the uncertainties of design faults. The basic concept is that having N independently developed versions of the software minimises the likelihood of coincident failures and vulnerabilities.
The system is then built from these (three or more) separate software versions with a decision algorithm, for example a majority vote, determining the overall result. Diversity can be enforced not only at the software design level but also at the functional level [9]. Functional diversity is a way of forcing multiple design teams to be "intellectually diverse" in their solutions to the design problem. The N-version technique has mainly been adopted for critical and special-purpose cases, like software for flight control computers [3], [4], and the design of nuclear reactor protection systems [5], [15], because of the high costs involved. The main objective of most of the work done on diversity is to achieve a higher reliability of software applications [16]. Whether diversity is a convenient means for delivering high reliability has been the subject of debate and discussion [17]. Recently, diversity has also been investigated from the perspective of populations and ecosystems of software systems. Relevant research has been done on survivable systems, i.e. systems characterised by the ability to provide essential services even in the presence of intrusions and faults and to recover full services in a timely manner [6]. Specifically, [7] describes systematic techniques to improve resistance to intrusions and attacks by diversification of system software, thereby increasing the cost and difficulty of identifying vulnerabilities. The approach is based on stochastic diversification and is achieved by transforming a program into several versions, each with additional logical complexities that obscure the behaviour whilst maintaining correct function and performance.

2.2 Requirements

The core problem addressed by this paper is enforcing diversity for widespread commercial off-the-shelf software (COTS) in order to reduce the risks of large-scale attacks and other failures. We target large and homogeneous populations of commercial software installations, commonly used for day-to-day business and consumer tasks. Examples of these populations include enterprises (a large number of employees' PCs having the same software install-base) and Internet communities of people and organisations sharing similar interests. In this context, the problem of making a specific software installation survivable to a fault or an attack is secondary to the problem of minimizing the effects as an attack spreads and maximizing the number of working systems within the population. The impact of an attack or a fault on commercial software on a single installation is usually minimal, especially when common security policies (like periodic data backup, virus checking, etc.) are put into practice. On the contrary, it is the transmission of attacks over a larger population, in a short period of time, that creates the serious economic and social damage; for example, it can cause the interruption of network and e-mail communication, leading to the interruption of business processes. A further issue is the clean-up costs, where considerable effort from technicians is required to stop viruses and worms from spreading by applying patches and recovering from compromises. The basic requirements for diversity in common commercial software can be summarised as:
- Provide mechanisms to avoid faults and attacks that quickly propagate over a large population of installations;
- Preserve the relatively low costs of COTS (due to the economy of scale);
- Avoid the need for extra computational resources.
Special-purpose solutions traditionally used in the N-version approach do not fulfil those requirements and represent an over-engineered approach to the specific problems addressed in this paper. It is also not really clear whether commercial software developers are willing to embrace diversity techniques based on obfuscation of the deployed code [7]. The next section describes an alternative approach based on existing mechanisms for the design and development of software systems. This approach introduces an element of diversity at deployment time, without requiring any modification of the deployed code or additional computational resources.

3. Proposed approach

The approach proposed in this paper exploits the componentisation and object-oriented aspects of modern software: current operating systems, software applications and solutions are built from software components, each of them implementing specific, well-defined functionality. Software engineering techniques dealing with software life-cycle management have been around for years and are commonly used during software development projects. For example, tools for software modelling, based on UML [10] or similar techniques, provide mechanisms to model, design, refine, implement, test, deploy and maintain complex software systems and applications. Specifically, complex software applications can be analysed from structural and behavioural aspects, and different views can be provided at different levels of abstraction, ranging from high-level classes and objects (and their relationships) to the physical software components that are going to be deployed. During software design and development, designers and engineers should also go through risk management activities, which include identifying critical software components and their vulnerabilities to potential attacks and faults, and mitigating the involved risks. The methodology for identifying critical components would be different from traditional critical-system tasks. It may be that the complex algorithms at the heart of the system are considered critical and therefore must be well engineered. It should also be recognized that the most vulnerable components are also highly critical – this suggests that external-facing components should be considered critical. It is software bugs in these external-facing components that often become subject to attacks such as buffer overflow attacks, providing viruses and hackers with a way into the system. The critical components are not necessarily those directly developed as part of an application. As application development frameworks become more advanced and include many more base libraries (such as Java and Microsoft's .Net), bugs in these underlying libraries could negate the advantages of diversity. Diversity could be introduced at the level of these frameworks as well as, or instead of, at the application layer. The proposed model makes use of multiple implementations of critical components. Because of the separation of concerns between the design and the implementation phases, modern software development tools allow the development of multiple implementations of the same software component, in a way that is compliant with defined interfaces. We relax the constraint of having multiple implementations of the whole software application (as mandated by most of the N-version techniques) and concentrate the effort only on critical components.
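As a minimal, hypothetical sketch of this idea (the interface and implementation names are assumptions made for illustration and are not taken from the paper), a critical component can be specified by a single interface with several functionally equivalent implementations behind it:

```python
# Illustrative sketch: one critical, external-facing component with two
# functionally equivalent implementations behind a single interface.
# All names are assumed for the example.

import re
from abc import ABC, abstractmethod


class HttpRequestParser(ABC):
    """Interface of a (hypothetical) critical, external-facing component."""

    @abstractmethod
    def parse(self, raw: bytes) -> dict:
        """Parse a raw request line into a small dictionary."""


class SplitBasedParser(HttpRequestParser):
    """Implementation A: straightforward string splitting."""

    def parse(self, raw: bytes) -> dict:
        method, path, version = raw.decode("ascii", "replace").split(" ", 2)
        return {"method": method, "path": path, "version": version.strip()}


class RegexBasedParser(HttpRequestParser):
    """Implementation B: same contract, independently written with a regex."""

    _LINE = re.compile(rb"(?P<method>\S+) (?P<path>\S+) (?P<version>\S+)")

    def parse(self, raw: bytes) -> dict:
        match = self._LINE.match(raw)
        if match is None:
            raise ValueError("malformed request line")
        return {k: v.decode("ascii", "replace") for k, v in match.groupdict().items()}


if __name__ == "__main__":
    for parser in (SplitBasedParser(), RegexBasedParser()):
        print(type(parser).__name__, parser.parse(b"GET /index.html HTTP/1.1"))
```

Because both implementations honour the same contract, an installer can pick either one without changing the rest of the application, which is the property the model relies on.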
3.1 Model

A commercial software application is generally supplied by a software provider as a package on some form of storage medium, including its components and installation software which, when run on the customer's computing environment, installs the various components for future use. The individual components included in each package are generally identical to the components on other packages provided to other customers. Usually the result is that all the software installations are substantially identical. The user may install different options and various patches and service packs, bringing a degree of diversity; however, many corporate systems will have a software repository where the company standard is issued with standard options and patches. In our model, we introduce an element of diversity at installation time by modifying the installation process. Multiple implementations of critical components are available in the installation package. For each critical component a software installer randomly selects and installs one of the available implementations. Figure 1 shows the model of a system implementing this approach.

[Figure 1: Model]

Software is distributed by means of an installation package which includes three basic parts:
- A software components bag;
- An installation script;
- A software installer.

The *software components bag* contains the components used to form the software application. Software components might include COM components, EJB components, .dll libraries, .exe executable files, configuration files, etc. For each critical component multiple implementations are available. For example, in Figure 1, components A and C are critical. Two implementations are available for component A and three implementations are available for component C. The *installation script* contains the necessary information to successfully install the software application, including the list of all the available components, the installation sequence and dependency constraints. The *software installer* is the core part of the installation package. It contains three modules:
- Installation Engine;
- Random-selector module;
- Installation knowledge base.

The *installation engine* is in charge of interpreting the installation script and installing the software application. This engine interacts with a *random-selector module* each time a critical component (having multiple implementations) has to be installed. The *random-selector module* is driven by a random function that, given a critical software component, selects one of the component implementations at random. This function can be constrained by information contained in the installation knowledge base. The *installation knowledge base* is a local database containing contextual installation information. This information might include the status of other installations and the evolution of a particular installation over its lifetime (including changes due to patches, upgrades or maintenance). It may also include known bad combinations where components have known faults when installed on particular OS versions. In particular contexts, like enterprises and large organizations, a variant of our model can be used to install a particular software system on a number of computers. In this situation, the selection of critical software components to be installed may depend upon which implementations of components have previously been installed on other computers.
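A toy sketch of the selection step just described might look like the following; the component names, the structure of the knowledge base and the identity format are illustrative assumptions, since the paper does not prescribe an implementation:

```python
# Toy sketch of the installer's random-selector step. Component names, the
# knowledge-base structure and the identity format are illustrative assumptions.

import random

# Installation script: for each component, the list of available implementations.
INSTALL_SCRIPT = {
    "A": ["A_impl1", "A_impl2"],             # critical: two implementations
    "B": ["B_impl1"],                        # non-critical: single implementation
    "C": ["C_impl1", "C_impl2", "C_impl3"],  # critical: three implementations
}

# Installation knowledge base: combinations known to be faulty together.
KNOWN_BAD_COMBINATIONS = [{"A_impl2", "C_impl3"}]


def select_components(script, knowledge_base, rng=random):
    """Pick one implementation per component, avoiding known bad combinations."""
    for _ in range(100):  # bounded number of retries
        choice = {name: rng.choice(impls) for name, impls in script.items()}
        chosen = set(choice.values())
        if not any(bad <= chosen for bad in knowledge_base):
            return choice
    raise RuntimeError("no valid combination found")


def install(script, knowledge_base):
    choice = select_components(script, knowledge_base)
    # The sequence of installed implementations identifies this installation.
    identity = "-".join(choice[name] for name in sorted(choice))
    # A real installer would now copy files, register components and delete
    # the implementations that were not selected.
    return choice, identity


if __name__ == "__main__":
    components, identity = install(INSTALL_SCRIPT, KNOWN_BAD_COMBINATIONS)
    print("installed:", components)
    print("installation identity:", identity)
```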
The information necessary for making such decisions is stored in the installation knowledge base. This can ensure that there is sufficient diversity in a computing environment, for example a server farm, thereby ensuring a degree of resilience. Each installed software application has a proper identity defined by the sequence of the installed components. This sequence is a sort of e-DNA. The installer stores this sequence in a local persistent configuration file along with a copy of the installation knowledge base. Another variant of the model uses an installation mechanism provided by a centralized installation service, for example within an enterprise. This approach facilitates the collection and management of configuration information associated with each installation for future software maintenance or upgrades. After the installation process, for security reasons, the software installer makes sure that implementations of critical components that have not been installed are deleted from the platform where the software is installed.

3.2 Properties

The proposed model introduces an element of diversity into the software at installation time without the constraints of the traditional N-version software. It is not as expensive or impractical as the N-version approach as it does not require several distinct full implementations of the same software and their parallel execution. However, it protects a population of systems rather than any particular system and as such does not provide a solution for safety-critical systems. Not all the components need to have multiple implementations. At the end of the installation phase, a copy of the software application is installed as usual but with a potentially unique combination of software components. Every installation of the same software application is potentially different, but its functionalities, interfaces and expected behaviour are the same. The degree of diversity directly depends on the number of critical components, the number of available implementations and the selection criteria in the random function. This approach does not prevent a specific installation of an operating system or software application from being subject to a fault or being attacked: it is likely that components will still have software bugs and vulnerabilities. Nevertheless, it reduces the risk of massive propagation of faults and attacks to a large population thanks to the intrinsic diversity of each installation. With this approach it is also less likely that two or more installations of the same software will crash due to the same fault, at the same time, when executing similar operations. Hacking techniques taking advantage of bugs in a specific component (or due to the combination of specific components) may gain information from a specific installation, but the chances of this being applicable to other systems (using different components) are very much reduced.

4. Experiments

The discussion so far has claimed that adding diversity into a population of systems increases its robustness, particularly when attacked by viruses that take advantage of common bugs. A simulation of the spread of a virus has been carried out to demonstrate some of the properties that increased diversity would achieve. The simulator created a number of virtual machines, each with its own IP address, and a list of components, along with implementations (versions) of each component.
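A much-simplified sketch of such a simulated population could look like this; the population size, component names and number of versions are arbitrary assumptions, since the original simulator is not described beyond the prose above:

```python
# Much-simplified sketch of the simulated population: virtual machines with an
# IP address and a randomly selected version of each component. All parameters
# are illustrative assumptions.

import random
from dataclasses import dataclass


@dataclass
class VirtualMachine:
    ip: str
    component_versions: dict  # component name -> installed version number
    infected: bool = False


def build_population(n_machines=6000, components=("A", "B", "C"), n_versions=3, seed=0):
    rng = random.Random(seed)
    population = []
    for i in range(n_machines):
        ip = f"10.0.{i // 256}.{i % 256}"  # one flat subnet for simplicity
        versions = {c: rng.randrange(n_versions) for c in components}
        population.append(VirtualMachine(ip, versions))
    return population


if __name__ == "__main__":
    machines = build_population()
    vulnerable = sum(1 for m in machines if m.component_versions["A"] == 0)
    print(f"{vulnerable} of {len(machines)} machines run the version the virus targets")
```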
A virus with a propagation mechanism similar to code red [22] was then simulated, where the virus infects by using a bug in a particular version of a component. Once a machine is infected the virus tries to spread to other machines by generating IP addresses at random according to the current machine's subnet mask; thus the probability of picking local machines is high, but there is a sufficient chance of IP addresses outside of the local network to ensure a worldwide spread. The virus then pings the other IP addresses and attempts to infect those it finds using the same bug. Each infection tries to infect 200 other machines and then remains dormant – in the case of code red a security hole allowing access to all files remained in place.

[Figure 2: Experiment 1 – Increasing the diversity of components where a virus attacks a single version]

The first experiment simulated 6000 systems on a sub-net. A number of simulations were run with increasing diversity, from 1 to 6 versions, in the component targeted by the virus. Figure 2 shows the variation in the rate of infection over time. Two factors are worth noting: firstly, since only one version of the component is being infected, the final number of infections is inversely proportional to the number of component implementations; secondly, the rate of infection is slowed as it becomes harder to find susceptible systems. It is also worth noting that some diversity can be very valuable, but as the diversity increases the rate of slowdown in infection rates decreases, and as such there are clearly diminishing returns. The second experiment was carried out on the same set of systems but looked at the effect of having viruses attacking all versions of the components. Separate viruses were created to attack a component with 1, 2 and 3 implementations. Figure 3 shows the infection rates over time. The infection rates do saturate, although increasing the number of component implementations does delay the rate of infection, and it also delays the peak in network traffic due to the virus by a corresponding amount. This delay in infection rate is due to the reduction in the probability of finding a vulnerable system and hence will be inversely proportional to the number of implementations of a component. This gives system administrators a larger window in which to clean up machines and install the necessary bug fixes. These experiments show that there is a clear advantage to increasing the diversity of standard components to help in managing attacks. It is clear from the results that both the number of infected systems and the speed of infection are inversely proportional to the level of diversity. It is worth noting that a composite virus such as Nimda [23] that infects via many software bugs will increase the infection rate. It is clear that a small amount of diversity in many standard components will bring considerable gains, but after that the returns will diminish.

5. Discussion

The feasibility of the proposed model has to be validated against real-world scenarios. Section 6 describes our plans for tests and further experiments, while this section discusses general software engineering and operational aspects relevant to the model. The proposed model does require the development of multiple versions of software, but it restricts this requirement to critical components.
Even if it relaxes the constraints introduced by the classic N-version approach, particular attention still has to be paid during two critical phases:
- Risk analysis for potential vulnerabilities and the subsequent identification of critical components;
- The software testing phase.

If the risk analysis phase is not properly executed, the misjudgement of which components are critical could seriously compromise the effectiveness of the diversity introduced at installation time. On the other hand, an extended usage of this technique (by including components that potentially are not critical) might increase the overall complexity of writing and maintaining the software and the associated costs. The software-testing phase must include white- and black-box testing activities for each implementation of a software component. Modern software engineering and development tools provide mechanisms to define interfaces and behavioural specifications for software components. Multiple implementations of each software component should be tested against those specifications. Testing all the possible combinations of the software components can be extremely expensive. On one hand, the fact of having a large set of possible combinations of software components is the strength of this approach. On the other hand, it introduces complexity. The testing phase of the complete software application can still be done on an empirical basis, by testing a reasonable set of installations of the software, generated in a random way. By doing so, particular faulty combinations of software component implementations can be detected in advance and avoided during the installation of the software (by storing this information in the installation knowledge base of the installation package). Gathering knowledge from software installations is extremely important for software producers, not only during the testing phase but also during the whole software lifecycle (maintenance, upgrades, etc.). It is important for a software producer to collect information about bugs and undesired behaviours from the population of software installations in order to correct faults and avoid the occurrence of faulty combinations of components in future installations. This task is simplified by the fact that each software installation has an identity (its e-DNA) describing the particular combination of deployed components. The information collected by monitoring for problems and issues related to deployed components can ultimately be used to make decisions about the destiny and evolution of specific components (modify, extend, abandon, etc.) or combinations of components. In large enterprises and organisations the task of monitoring large populations of software installations can be delegated to traditional IT support centres, who can then interact with software providers. Clearly, the software installer module plays a key role in ensuring a correct installation of software components and the enforcement of particular installation policies. It is a trusted module. The overall installation package must be properly secured to guarantee its integrity and trustworthiness (by digitally signing its code and potentially obfuscating its modules). If centralised within an enterprise or organization, the software installation service plays the role of a trust service [11] and it must be accountable during software installation, information gathering and maintenance management.
The proposed approach to software diversity is potentially suitable not only for traditional software producers but also for open-source software. In both cases it is important that component interfaces and expected behaviours are clearly defined and specified at design time. Specifically, the open-source initiative can take advantage of the willingness of many participants to contribute to the development of software solutions: multiple implementations of software components can be made available in software packages and installed using our approach.

6. Current and Future Work

In addition to the experiments performed by simulation, we are also investigating the feasibility and effectiveness of our model by means of practical experiments involving widely distributed software applications. Our tests will include experiments with software applications that provide long-term storage of digital documents [20] and distributed software agents that support storage and replication of data [21]. We are planning to re-develop these applications by providing at least two different implementations for each critical component and to create multiple populations by deploying such applications, including one where applications are deployed in a classic way, without diversity. These experiments are going to help us better understand the effects of the random aggregation of components at deployment time, measure the efficacy of the random selection module and understand the feasibility of adaptation mechanisms. We are also going to observe and measure in practice the effects of attacks (exploiting vulnerabilities introduced by software bugs) on populations created by using our diversity approach and compare them against a population deployed in a conventional way. In terms of future work, we are planning to investigate the feasibility of our approach for advanced e-commerce scenarios, whereby multiple implementations of core e-services (like electronic payment services, billing services, booking services, etc.) are available to consumers and enterprises (for example by using UDDI servers [12]) and are composed on-the-fly [13], [14] to obtain added-value e-services. In such a context the composition of web services will happen by randomly selecting and aggregating core web services with equivalent functionalities that are compliant with the user's requirements (contractual clauses, specifications, QoS policies, etc.).

7. Conclusion

Today it is of primary importance to deal with the lack of diversity in widely deployed commercial software, as faults and attacks quickly spread across large populations of identical installations, creating enormous economic costs. Current approaches to diversity, based on multiple versions of the same software, potentially running in parallel on different computational resources, are too expensive and are mainly used in critical and special-purpose cases. This paper introduces an alternative approach to diversity which takes advantage of the componentisation of modern commercial software. Critical software components are identified during the risk assessment phase and multiple (functionally equivalent) implementations are developed. These multiple implementations of components are distributed within software installation packages. At installation time, an installation module randomly selects and installs an implementation of each critical component, respecting potential pre-defined constraints and policies.
The proposed system can take account of problems encountered in a large population of installations of the same software application. Components might evolve during their lifetime. Criteria for randomly selecting software components can adapt dynamically so that problematic combinations of components are avoided and specific faulty components are modified or banned. Software developers need to clearly specify software component interfaces, their behaviour and identify critical components by assessing their vulnerabilities and the involved risks. Our experiments based on simulations show that there is a clear advantage to increasing diversity of standard components to help in managing attacks. It is clear from the results that both the number of infected systems and the speed of infection are inversely proportional to the level of diversity. The feasibility and efficacy of the proposed model has to be verified in real-world scenarios. 8. References
{"Source-Url": "http://www.researchgate.net/profile/Adrian_Baldwin/publication/221028233_Reducing_Risks_of_Widespread_Faults_and_Attacks_for_Commercial_Software_Applications_Towards_Diversity_of_Software_Components/links/0912f50e4a4b8c890f000000.pdf", "len_cl100k_base": 5530, "olmocr-version": "0.1.50", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 58772, "total-output-tokens": 7365, "length": "2e12", "weborganizer": {"__label__adult": 0.0003285408020019531, "__label__art_design": 0.00035119056701660156, "__label__crime_law": 0.0007500648498535156, "__label__education_jobs": 0.0004665851593017578, "__label__entertainment": 8.928775787353516e-05, "__label__fashion_beauty": 0.00012993812561035156, "__label__finance_business": 0.0003800392150878906, "__label__food_dining": 0.0002741813659667969, "__label__games": 0.0008687973022460938, "__label__hardware": 0.0010251998901367188, "__label__health": 0.0004472732543945313, "__label__history": 0.00018787384033203125, "__label__home_hobbies": 6.866455078125e-05, "__label__industrial": 0.0003185272216796875, "__label__literature": 0.000270843505859375, "__label__politics": 0.00021648406982421875, "__label__religion": 0.00026917457580566406, "__label__science_tech": 0.041015625, "__label__social_life": 8.207559585571289e-05, "__label__software": 0.0294189453125, "__label__software_dev": 0.92236328125, "__label__sports_fitness": 0.00017404556274414062, "__label__transportation": 0.00029206275939941406, "__label__travel": 0.0001418590545654297}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 35902, 0.02026]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 35902, 0.61985]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 35902, 0.91887]], "google_gemma-3-12b-it_contains_pii": [[0, 1640, false], [1640, 4959, null], [4959, 7622, null], [7622, 10436, null], [10436, 13387, null], [13387, 15106, null], [15106, 17524, null], [17524, 20366, null], [20366, 22460, null], [22460, 24098, null], [24098, 27089, null], [27089, 30129, null], [30129, 32604, null], [32604, 34443, null], [34443, 35902, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1640, true], [1640, 4959, null], [4959, 7622, null], [7622, 10436, null], [10436, 13387, null], [13387, 15106, null], [15106, 17524, null], [17524, 20366, null], [20366, 22460, null], [22460, 24098, null], [24098, 27089, null], [27089, 30129, null], [30129, 32604, null], [32604, 34443, null], [34443, 35902, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 35902, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 35902, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 35902, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 35902, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 35902, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 35902, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 35902, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 35902, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 35902, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 35902, null]], "pdf_page_numbers": [[0, 1640, 1], [1640, 4959, 2], [4959, 7622, 3], 
[7622, 10436, 4], [10436, 13387, 5], [13387, 15106, 6], [15106, 17524, 7], [17524, 20366, 8], [20366, 22460, 9], [22460, 24098, 10], [24098, 27089, 11], [27089, 30129, 12], [30129, 32604, 13], [32604, 34443, 14], [34443, 35902, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 35902, 0.0]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
d84a5f7c108ad785235953320c2c7b7fc3393835
Performance Learning of Agile Methodology Using Paired Courses of Systems Analysis and Design and Web / Mobile Programming Edward V. Weber EWeber@Millikin.Edu Tabor School of Business Millikin University Decatur, IL 62522, United States of America Abstract In an Application Development specialization within an Information Systems curriculum, the concepts of the Agile methodology of application development are often explored in a variety of courses including Systems Analysis and Design, Web and Mobile Application Design, and various levels of Programming courses such as Introductory, Intermediate, Advanced, Web, Mobile, etc. Too often, however, these Agile concepts are only being presented and discussed as a presentation of theory. Within each isolated course the Agile concepts typically cannot be fully practiced for one primary reason: there may not be sufficient resources available having the required complimentary skill set to complete the Agile team environment. In other words, rarely can the Agile concepts be fully practiced within the confines of a single course. This paper seeks to inform how two courses have been successfully paired together to afford the Performance Learning aspect of implemented and practiced Agile development within both courses. Keywords: paired courses, Performance Learning, IS Curriculum, Agile methodology, application development, analysis and design, programming, undergraduate education 1. BACKGROUND IS Curriculum Content Undergraduate students in an Information Systems program consume a variety of courses to develop a diverse set of skills which fully prepare them for their future roles within an IS organization. The Association of Computing Machinery (ACM) has created, updated, adopted, and otherwise provided curricula recommendations for years. The IS 2010 Curriculum Guidelines provides an excellent framework for structuring an overall IS program and is used by many institutions in the planning and development of their IS curricula. (Topi, Valacich, Wright, Kaiser, Nunamaker Jr, Sipior, & de Vreede 2010) While there does seem to be debate in the literature as to how much the IS 2010 Curriculum Guidelines are being implemented or to what degree they are being strictly followed, much of this debate seems to stem around the differentiation between the most fundamental of program definitions (e.g. between IS, vs. IT, vs. CIS, vs. MIS, etc.) For the remainder of this paper, it should be noted that this approach has been endeavored within an Information Systems program which is situated within the Tabor School of Business at Millikin University, a small private University in Decatur, Illinois. When considering material for inclusion within specific IS courses, these guidelines, coupled with professional experiences may help to inform a content developer regarding specific topics and methods of delivery. For example, it is a given that any introductory Programming course will necessarily include concepts such as variables, data types, expressions, operators and operations, decision logic, looping, etc. Likewise, a Data and Information Management course will include concepts such as data models, entity relationships, object-oriented models, data types, indexing, the role of a DBMS, database languages like SQL, DDL etc. In these basic descriptions defined above, the observant reader may have recognized the intentional duplication of the topic named ‘data types’. This is to help the reader recognize a typical pedagogical concept of topic overlap between courses. 
Content Overlap One ongoing concern for course developers and deliverers is how to ensure that sufficient resources are dedicated to each critical topic in each course while acknowledging that certain topics will necessarily be repeated, in varying degrees (either as original material or as review material), across multiple courses. For example, the concept of ‘data types’ will necessarily be discussed in each of the Programming courses as well as in the Data and Information Management courses as well as in the Analysis and Design courses. Considering that these individual courses may not typically be required to be taken in series, it becomes obvious that within each course, certain overlapping concepts may become an issue of either under- or over-coverage and this variability occurs between courses and between students within courses. Course content developers and deliverers can often ‘discover’ which overlapping topics need additional coverage for their students and which have been sufficiently mastered and can typically adjust their lesson plans accordingly. In other words, if a basic review or pre-test of the concept of ‘data types’ reveals insufficient mastery of the concept, the required material can be covered and appropriate practices can be undertaken and assessments can be made to insure sufficient concept mastery. One of the biggest problems with this approach is that from course to course and even from student to student, there is an ongoing risk that there will be a significant gap or range between students with insufficient topic mastery and those students with topic proficiency. As a result, resources external of the course meeting times can be made available to help bridge this gap. Everything from asynchronous online materials, practice materials, and one-on-one tutoring are just some of the options for this type of remediation. For example, after the successful completion of one or two courses that included the concept of ‘data types’, a course deliverer will have an expectation that the students have a sufficient mastery of this concept. If a student appears to be lacking at this time, these aforementioned remedies can be employed to bring this student’s performance up to the required level to enable the student to then continue with the new content. Therefore, it is safe to say that while course content overlap is not a new discovery nor are its methods of remediation new, it does, in fact, still remain a pedagogical issue. Curriculum Model Evolution With the IS 2010 Curriculum Guidelines, there was an intentional ‘flattening’ of the curriculum structure from the previous IS 2002 model in a direct effort to “...offer a flexible structure that can integrate electives easily” (Topi et al., 2010). This flattening has resulted in the removal of intentional sequencing of courses. This has resulted in a necessary increase in the number of topics that experience this type of problematic content overlap as previously discussed. While some of this additional content overlap can be absorbed into the resulting new curriculum via the previously defined remedies, there is at least one topic (and most probably more) that cannot be sufficiently addressed within the constructs of a single course. 2. PROBLEM STATEMENT In the previously discussed examples, specific course content and concepts may necessarily overlap among multiple courses. 
As a result, at any of these various consumption points within the curriculum, individualized techniques may be utilized to ensure that the students achieve sufficient skills mastery. However, certain new and evolving concepts and content cannot be managed in this traditional way and require additional or different methods of ensuring sufficient student skills mastery. One such topic is the Agile methodology and how it contrasts significantly with the traditional Waterfall methodology for Systems and Application Analysis and Design. In the traditional Waterfall methodology of the Systems Development Life Cycle (SDLC), a definitive sequence exists (Figure 1) which allowed for curriculum development to mirror the sequence of critical skills acquisition. For example, the critical skills necessary to execute the Development phase of the Waterfall SDLC methodology could be fairly isolated within the learning objectives associated with the Programming courses with only minor overlap between the Design and Testing phases. Likewise, the critical skills necessary to execute the Requirements and Design phases could be fairly isolated within the learning objectives of the Analysis and Design courses with only minor overlap with the Development phase. However, just as the IS 2010 Curriculum Guidelines now recommend the flattening of the curriculum to intentionally remove the sequential nature of content presentation, this flattening is likewise represented in the newer application development methodologies such as those represented in the Agile methodology (Figure 2). Whereas the Waterfall methodology represents a single flow of activity over the entire span of the project, the Agile methodology represents an iterative collection of activities that are, themselves, repeated multiple times in short bursts or cycles. The Agile Manifesto states as one of its core principles an intent to "Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale" (Fowler & Highstreet, 2001). As a result of this cyclic nature, the entire complement of skillsets required to effectively operate in an Agile environment must be fully developed or available. With the Agile methodology, application development exists along a continuum of work with skillsets consistently overlapping. The cyclic design of this methodology requires ongoing and continuous overlap of skillsets and is typically implemented using team members with expertise in the multitude of skill areas. While the accumulation of some of the required skills necessary to successfully operate within an Agile methodology can still be fairly isolated within specific coursework, certain aspects of Agile methodology require the existence of a multitude of skills which often are not fully developed within a single course. One approach to teaching Agile and some of its subordinate components such as Scrum, XP, etc. is to utilize a full course, as Mahnic presents, by allocating a capstone course specifically on Agile Software Development using Scrum (Mahnic, 2012). In this approach, Mahnic states, "In pursuing the challenge of combining formal with practical learning, the design of the capstone course can rely on previous courses having provided formal lessons on the aforementioned topics, so just a short theoretical introduction to agile methods (including the reason for their importance) is enough to encourage students' buy-in before starting the practical work." (Mahnic, 2012).
Considering the significant growth of Agile in the IS workplace, it seems more desirable to have earlier incorporation of this methodology in multiple places within the undergraduate curriculum. This also affords the student multiple opportunities for Performance Learning of this critical component rather than waiting for a single capstone course to serve as the single source for this experience. But unlike Mahnic's capstone course approach, the paired class approach understands that not all of the individual students will have all of the requisite skills for full participation in the Agile team. For example, Cervone defines an Agile Scrum team as a cross-functional team of five to ten people who work on the project full time (Cervone, 2010). Within a cross-functional Agile team, the skills of needs assessment, analysis, design, programming, testing, and documentation are all required. Typically, these skills are found in the various team members including business unit experts, analysts, developers and other members of a fully cross-functional team. But all of these skills are not fully developed for undergraduate students within the content of a single course. Therefore, in traditional undergraduate courses, some required content is typically sacrificed and, as a result, the practice of Agile concepts cannot be fully representative of how the work is accomplished in 'the real world.' Consider attempting to practice the tenets of Agile methodology within programming courses. For programmers to be productive, the required definitions of application inputs and system outputs must be available, and these are typically identified within the context of analysis and design. Therefore, although these components are required in order to complement the programming learning, these components must be artificially or externally supplied if the programming course was not sequenced within the curriculum to reside after an analysis and design course. In other words, programming students who have not mastered the skills of analysis and design will be ill-prepared to fully produce without some external resources. And yet, the nature of the Agile methodology requires these skills to be available to the Agile team through its membership. Whereas this sequencing of coursework was formerly a part of the curriculum structure as well as a part of the Waterfall SDLC methodology, this sequencing is no longer supported in either recommended curriculum design or the Agile SDLC methodology. Therefore, the skills required to effectively practice the Agile methodology are not typically or consistently available to all application development students at predictable intervals within the curriculum.

3. APPROACH

Performance Learning in IS

In order to provide a more accurate representation and direct opportunities to practice concepts learned in the classroom, Performance Learning is utilized within the IS curriculum at Millikin University in Decatur, Illinois. Performance Learning is defined at this institution as "…the opportunity for students to experience real risk and reward while having their work evaluated by a third-party stakeholder" (Podeschi, 2015). In the Performance Learning environment of IS, course content is immediately applied by students utilizing newly acquired skills while working on real-world projects for real-world third-party stakeholders with real-world risk and rewards. Performance Learning is typically not utilized during the foundational coursework.
Likewise, in accordance with the IS 2010 Curriculum Guidelines, the foundational coursework is prerequisite to the more advanced courses within the Application Development curriculum. However, after students have successfully completed Foundations of Information System and the other core IS courses, students may enroll in the Systems Analysis and Design course, or the Web/Mobile Programming course, or they may co-enroll in both in the same semester. In both the Systems Analysis and Design course as well as the Web/Mobile Programming course, the concept of Agile development is fully explored and the relationship of these two courses becomes obvious. But in order to conduct Performance Learning with the concept of utilizing the Agile methodology, a single project needed to be chosen that would afford the opportunity for these two classes to work together while independently learning the content that is unique to each course. This technique has been successfully applied in the Fall semesters of 2014 and 2015. In the first year, an internal University project was undertaken to develop a fully automated, data-driven, web-enabled, self-maintaining, searchable, organizational hierarchy chart for the University. In the second year, an external project was undertaken to provide consulting and systems enhancements to a home-grown inventory management, point-of-sale, and customer management web-based system for a local health food and supplement retail store. Performance Learning of Agile The implementation of Performance Learning for the concept of Agile methodology was achieved by pairing the two independent courses of Systems Analysis and Design and Web/Mobile Programming together to create a setting to more fully explore the Agile concepts within a single semester. In each course, the first quarter was dedicated to new content and material that was specific to each course. Also in the first quarter, Agile methodology was explored in both courses with shared content, terminology, and discussions of how the concepts integrate within the other courses. In the first year, the Systems Analysis and Design course had 16 students and the Web/Mobile Programming course had 7 students. In the first year, 3 of the students were co-enrolled in both courses. In the second year, the Systems Analysis and Design course had 14 students and the Web/Mobile Programming course had 6 students. In the second year, 5 of the students were co-enrolled in both courses. In both years, the Faculty member served as the Project Owner in conjunction with the third-party stakeholders. The primary reason for this was that the third-party stakeholders themselves were neither fully prepared to serve the role as content experts nor were they fully versed in the aspects of Agile development. This activity was, in fact, a learning opportunity for them as well. In both years, the Agile methodology was utilized with narrowly specified objectives to accommodate the significantly reduced time frames within a single semester. The goal was to get the students into the Agile mindset of rapid development and delivery in an iterative mode. In the first year, in precisely one class period, the Systems Analysis and Design class created a mockup layout of their desired design for the application and had documentation ready to present to the Web/Mobile class. In the following Web/Mobile programming class, the students created the prototype of the entire driver page which would become the backbone of the entire application. 
In just two 75-minute sessions, the framework of an entire application was analyzed, designed, built, and ready for initial testing by developers and users alike. In the second year, the Systems Analysis and Design class had planned an initial scope of making desired enhancements to the existing client Web application. While a team from this class was meeting with the client to confirm project scope, the Web/Mobile Programming class created a development environment to mirror the existing client production environment. Both teams experienced significant requirements changes as a result of their efforts. The Systems Analysis and Design class discovered that the author of the existing application had recently passed away and he represented the only IS personnel for the entire organization. Additionally, they learned that the author was self-taught, there was no system documentation of any kind, and there were some compiled C++ modules, in addition to the PHP web application, that were being used in production and which had no discoverable source code. The Web/Mobile Programming class discovered that the existing web application was configured in a manner in which all critical errors, warnings, and deprecated code errors were being suppressed from displaying in the production environment. When they created a working development environment and configured it so that they could properly code, test, and debug, they discovered over 140 PHP, JavaScript, and HTML errors throughout the application. In addition, they discovered some non-normalized data structures in the existing database as well as some queries and views that were failing because they were running so long that they timed-out before completion. When the two classes conducted their backlog meetings to detail their discoveries, they each immediately experienced the benefits of Performance Learning in the Agile development environment. Both classes recognized that adaptation was necessary to reconcile this real-world client’s actual needs vs. their perceived needs. Both classes adjusted the overall project objectives and deliverables with a new focus of creating a stable environment and documenting the existing systems to create a foundation for further analysis, design, and development. Performance Learning using Agile methodologies allowed the students to experience rapid development and the ability to adjust to rapidly changing requirements while remaining productive. 4. DISCUSSION There are a number of successes and challenges to this approach of pairing two courses (Systems Analysis and Design and Web/Mobile Programming) for the intent of creating a Performance Learning environment for Agile development. First we look at the successes. Successes With the paired course approach, students from one class could be intentionally teamed with members of the other class and also have at least one member who was co-enrolled in both classes. This mingling of students across classes is, by far, the greatest success of this approach. By structuring the teams of the classes this way, the students immediately experience the cross-functional nature of Agile teams in that some students will have expertise (or personal preference) for the Analysis and Design work while others will be specialized in the Programming or technical aspects of the work. This allows the content deliverer to focus on the role of being the Team Leader and facilitator of the students’ interactions with the client. 
Another achievement of this approach is that students learn through Performance Learning that Agile succeeds through rapid performance in short, potent, iterative sprints. By intentionally narrowing the scope of the sprints, the students quickly experience the successes of their efforts which keeps them highly motivated. Students who were co-enrolled in both classes consistently acknowledged how their co-enrollment afforded them a significant advantage in understanding the bigger picture of the project and also how it prepared them to recognize team member strengths and limitations and how to adjust their own efforts accordingly. Delivering the course content of Agile concepts became easier because co-enrolled students became de facto teacher assistants or tutors of the concepts across both courses. Shared understanding became essential for both courses and was rapidly achieved. Without this approach, Performance Learning of this concept cannot accurately occur in that there are insufficient resources in each of the single courses to properly satisfy the roles of the cross-functional team members. Without this approach, the content deliverer must attempt to satisfy all of the missing roles which does not provide an accurate representation of the actualities of Agile development. Challenges The biggest challenge that exists with this approach is the issue of time. Once students understand the fundamental concepts of Agile methodology, they quickly understand that the successes are achieved from the rapid cycles of Design, Build, Configure, Test, and Release. But they also recognize that having team members from both classes (as well as internal or external third-party clients) means that they will need to find time (outside of each of the regular class meeting times) to hold team meetings. This challenge is becoming one of the greatest issues across many undergraduate programs: Students are expected to work and perform in cross-functional teams with diverse membership but there are no intentional scheduled time allowances to afford the students time to meet. In contrast, in the ‘real world’, most Agile team members are in the same organization and, as designated members of an Agile team, they are afforded intentional time allowances from their respective managers to perform the work required of an Agile team. In this paired course approach for Performance Learning of the Agile concepts, the content is more successfully delivered and most of the concepts become quickly absorbed by the students. But then, the reality sinks in. The multitude of students having significantly divergent courses and activities with often over-filled time demands creates an exceptional hardship in trying to get the students to find coordinated time for all of the critical Agile team member activities. This issue does not appear to be an issue of student motivation. In most cases, the students fully understood what was required and were willing to fulfill their respective roles. However, there was consistent feedback that indicated that finding *coordinated time* amongst Agile team members (including both classes and the user community) was the number one impediment to the teams’ ongoing productivity. Another challenge to this approach involved the students who were co-enrolled in both classes. While some of these students expressed pleasure in the benefits of co-enrollment (i.e. seeing the ‘big picture’, serving as an intermediary, possibly taking leadership roles, etc.) 
others expressed frustration and feelings of being overwhelmed (i.e. feeling like they were doing most of the work because they were in both classes.) This challenge puts an additional burden on the deliverer of the course content to properly establish and communicate course and assignment rubrics that equitably and consistently describe individualized performance expectations. Another challenge that exists in this approach is that by the nature of Agile development, once the concepts are mastered and the actual iterative cycles commence, there is a natural ramp-up that occurs and, very quickly, the whole process is moving ahead at a fairly fast pace. Students that are struggling with individual course content run the risk of quickly falling behind, and therefore may become underperformers within the Agile team, which becomes a risk to both their own motivation and success as well as the team’s. Course deliverers must be prepared to recognize this challenge very early on and take corrective measures to ensure that no team member is left behind. Also, just as in the real world, this may eventually include reassignment of non-performing members. This also becomes part of the Agile content discussion, as team members come and go in the real world as well. 5. CONCLUSIONS When using the IS 2010 Curriculum Guidelines to refine an IS Application Development program, there may be a desire to limit the number of sequential courses beyond the foundational courses to encourage students to explore more diverse IS content across specializations. This creates a problem of IS content that becomes necessarily repeated (either as original content or as review) as threaded concepts that must be touched on in multiple courses. Some content, such as Agile methodologies and Application Development, can be discussed thoroughly in any number of discrete courses, but cannot be fully experienced by the students without a true Performance Learning opportunity. To provide this opportunity, two courses such as Systems Analysis and Design and Web/Mobile Programming can be ‘paired’ in a given semester to create an environment in which a Performance Learning project can be selected. These paired courses could have only individual member students but, ideally, they would have some number of students who co-enroll. Original course content is focused in both courses within the first quarter so as to lay a foundation for performance for the rest of the semester. Then, Agile concepts are delivered in both courses in a thorough and consistent manner. Students can then be assigned into Agile teams with representatives from the individual classes and, whenever possible, members who are co-enrolled. In this manner, students can directly experience the Agile concepts at work in fully cross-functional teams. Care must be taken to be attentive to time conflicts among team members. Additionally, content deliverers who also serve as the Team Leaders must be mindful and actively engaged in the individual team member performance. The content deliverers must ensure that a) co-enrolled students do not experience burn-out nor do they over-step their assigned team roles, and b) students who are struggling do not fall irreparably behind or, if necessary, they may be reassigned as is consistent with real-world Agile implementations. 6. REFERENCES Podeschi, R. (2016). Building I.S. Professionals through a Real-World Client Project in a Database Application Development Course.
{"Source-Url": "http://proc.iscap.info/2016/pdf/4034.pdf", "len_cl100k_base": 5077, "olmocr-version": "0.1.53", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 27396, "total-output-tokens": 5719, "length": "2e12", "weborganizer": {"__label__adult": 0.0009622573852539062, "__label__art_design": 0.0014657974243164062, "__label__crime_law": 0.0009622573852539062, "__label__education_jobs": 0.37890625, "__label__entertainment": 0.00020742416381835935, "__label__fashion_beauty": 0.0005726814270019531, "__label__finance_business": 0.00189208984375, "__label__food_dining": 0.0013113021850585938, "__label__games": 0.0012559890747070312, "__label__hardware": 0.0012865066528320312, "__label__health": 0.0014638900756835938, "__label__history": 0.000957489013671875, "__label__home_hobbies": 0.000453948974609375, "__label__industrial": 0.0013799667358398438, "__label__literature": 0.0010538101196289062, "__label__politics": 0.0006709098815917969, "__label__religion": 0.0012302398681640625, "__label__science_tech": 0.01428985595703125, "__label__social_life": 0.0005164146423339844, "__label__software": 0.007793426513671875, "__label__software_dev": 0.57763671875, "__label__sports_fitness": 0.0009775161743164062, "__label__transportation": 0.0020294189453125, "__label__travel": 0.0008473396301269531}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28804, 0.01356]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28804, 0.39149]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28804, 0.96154]], "google_gemma-3-12b-it_contains_pii": [[0, 2920, false], [2920, 7604, null], [7604, 10426, null], [10426, 14889, null], [14889, 19396, null], [19396, 24049, null], [24049, 28472, null], [28472, 28804, null], [28804, 28804, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2920, true], [2920, 7604, null], [7604, 10426, null], [10426, 14889, null], [14889, 19396, null], [19396, 24049, null], [24049, 28472, null], [28472, 28804, null], [28804, 28804, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 28804, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28804, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28804, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28804, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28804, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28804, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28804, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28804, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28804, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28804, null]], "pdf_page_numbers": [[0, 2920, 1], [2920, 7604, 2], [7604, 10426, 3], [10426, 14889, 4], [14889, 19396, 5], [19396, 24049, 6], [24049, 28472, 7], [28472, 28804, 8], [28804, 28804, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28804, 0.0]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
3577c258120fa9e1f6d4e808f120df45ef2cfb36
Scenario-Based Design of Cooperative Systems
Re-designing an Hospital Information System in Denmark
JAKOB E. BARDRAM
Computer Science Department, Aarhus University, Ny Munkegade 116, Bldg. 540, DK-8000 Århus C., Denmark
E-mail: bardram@daimi.au.dk
Abstract
Over the past few years, scenario-based design has attracted growing interest as a way to incorporate a focus on the future use of an application into the construction of software. Scenarios have, however, mostly been used in the design of user interfaces and have hence focused on single-user situations. Based on experiences from applying scenarios in the re-design of an Hospital Information System in the Danish healthcare sector, this paper describes how collaborative scenarios can be used in the design of cooperative computer systems and what such collaborative scenarios should contain. The paper concludes that such scenarios were useful in bridging the gap between understanding collaborative work practices and designing collaborative computer systems.
Key words: scenario-based design, analysis patterns, computer supported cooperative work, Hospital Information Systems, collaborative scenarios
1. Introduction
According to Friedman (1989) the biggest challenge in software development since the 1980s has been to fulfil the needs of the users. According to Winograd (1996) this challenge has in the 1990s been extended to bring design to software development in order to ensure that software really works – not in the traditional software engineering sense of reliability and efficiency, but in the sense that the software works for people in a context. Hence, we would like to work with design requirements for a piece of software that address the human activity of using computers for a specific purpose – requirements such as being easy to learn and use, augmenting human activities, meeting people's expectations, and becoming culturally meaningful artifacts. As argued by Carroll (1995) these latter requirements are far more difficult to specify and to satisfy. We have little prospect of developing final answers to questions about human activity – and certainly not at the level of detail that would provide specific guidance to designers. Our best course is, therefore, to develop rich and flexible methods and concepts that can incorporate descriptions of users and their current and potential use of a computer system into the very design reasoning about such a system. As narrative descriptions of what users do and experience when using a computer system, scenarios are such rich descriptions of human activities that can augment the design of computer systems. The use of scenarios within Human-Computer Interaction has typically addressed individual users. Jacobson's use-cases are, for example, "a sequence of transactions in a system whose task is to yield a result of measurable value to an individual actor of the system" (Jacobson 1992, my emphasis). A focus solely on the individual use of a computer system is, however, too narrow to reveal the conditions for developing and using computer systems, and might even contradict the purpose of contextualizing the design process by applying scenarios. When designing computer support for cooperative work (CSCW) it certainly becomes important to address the collaborative activities at a workplace. The aim of the present paper is to outline such collaborative scenarios. These scenarios were designed and used as a part of redesigning an Hospital Information System in Denmark.
The outset for using scenarios in this project originated in the work done in the EuroCoOp and EuroCODE projects at Aarhus University (Bødker et al. 1993; Grønbæk et al., 1995; Kyng, 1995). 2. Scenario-based design Scenario-based design is useful in situations where the design of the system is fragile in the sense that there is no detailed conception of exactly which work activities should be supported and in which way. Such projects are characterized by high uncertainty and risk, and therefore have to adapt an experimental and iterative way of design (Boehm, 1988). For the purpose of the discussion in this paper there are two central characteristics of such a design process. First, a design process is characterized by re-designing existing ways of doing things, which forms the basis for an understanding of how it can be done differently using a computer system. Today, the existing way of doing things already often involves some kind of computer support, which has limitation in its capability of supporting the ever changing work practices. A re-design necessarily has to start by investigating the problems and benefits of the existing system. Second, design is a creative activity that cannot be fully reduced to standard steps. However, a creative process is aided by inspiration, which to a large part comes from looking at the context of future use. Hence, creative design ideas emerge in the meeting between the computer professional, drawing on his technological knowledge, and the user, drawing on his or hers knowledge about the work-practices and the organizational setting (see e.g. Bødker and Christiansen, 1994). However, relying on creative ideas to emerge in the juxtaposition of the designers’ and users’ knowledge in a diffuse high risk design process creates problems of on the one hand to guide the creativity in “the right direction”, and on the other hand to decide whether the emerging ideas are so good and creative after all. Hence, in a design process we want to be able to answer questions like; are these design ideas useful, i.e. what kind of work activities do they support and which one do they disturb? How will these design ideas and the system in general fit into the existing organizational context, and how will this context be changed by the system – for good or for worse? How will the system integrate with work practices and instruments that remain unchanged? In these kind of design situations, the benefits of using scenarios are twofold: on the one hand they are vehicles for supporting the creative meeting between designers and users, and on the other hand they help answer the questions on the usefulness of a system compared to the work practices within an organization. Let us consider what scenarios aimed at describing collaborative work activities should entail. 2.1. Collaborative scenarios The purpose of collaborative scenarios is to provide support for the overall design of a computer system by describing collaborative work activities that are to be supported and/or affected by the future computer system. Such scenarios are work-driven, open-ended and informal narratives of what people do and experience as they try to perform different activities with or without making use of a computer application. Despite their popularity, there is no general accepted definition of what a scenario is, what it should entail, or how it should be used – even the inclusion of the “computer” in scenarios is not always taken for granted (see e.g. the definition in Karat (1995)). 
However, the definition provided by Carroll (1995) makes a good starting point for discussing what a collaborative scenario should cover: “The defining property of a scenario is that it projects a concrete description of activities that the user engages in when performing a specific task, a description sufficiently detailed so that design implications can be inferred and reasoned about. Using scenarios in system development helps keep the future use of the envisioned system in view as the system is designed and implemented; it makes use concrete – which makes it easier to discuss use and to design use.” (p. 4). This broad definition of a scenario however raises both interesting and difficult questions: First, system development has always been guided by descriptions of potential new ways of supporting and enhancing work by computer technology. So what is new in using scenarios and how does they differ from traditional requirement specifications? Second, what is meant by “a concrete description of activities”? What is meant by an activity? How concrete should the description be? What should this description contain? Third, what is important to write down in a scenario so that “design implications can be inferred and reasoned about”? What kind of implications are we talking about? The kind of implications that we would like the system to have or the unwanted kind of implications that just seems to come anyway? What is the role of the computer system in the scenarios describing activities? Finally, how can a scenario, as a narrative description on a piece of paper, “envision” a future use situation that is not even quite envisioned by the designer, let alone the user? And what is meant by “discuss use” and “design use” – with whom should we discuss and design use? Now these are general and far-reaching questions and the scope of this paper clearly do not allow a detailed discussion of all of them. Therefore, I shall concentrate on the second and third question and shortly comment on the last one. Answers to the first question has been discussed extensively by the different authors in the book “Scenario Based Design” edited by Carroll (1995) and in different paper in the Journal on Human-Computer Interaction (e.g. Bürkle et al. 1995). 3. The SAIK Project: Computer support for coordinating medical work The SAIK project was launched as an experimental pilot project at Kommunedata in an attempt to redesign a national-wide Hospital Information System called the Green System (GS). Currently GS is a large mainframe-based information system used by most hospitals in Denmark. The aim was to redesign GS into a client-server architecture, preserving the mainframe technology as a database server but building PC-based client applications dedicated to support the work at different departments within an hospital – e.g. at the emergency and casualty department, at a medical ward, and at a surgical planning office. One of the main problems within Danish hospitals today is coordinating the treatment made in the different departments. The purpose of the SAIK project was to investigate how coordination and planning of patient care happens today – both with and without computer support – and based on these investigations to reveal how this coordination can be supported by computer technology. The Patient Scheduler is a prototype that illustrates how the coordination of healthcare work can be coordinated by computers. 3.1. 
Methods and scope of the investigations The SAIK project took place over a period of two years, involving 5 different hospitals in Denmark. The project had two main strands: ethnographic inspired workplace studies of the cooperative nature of work within hospitals, and a participatory design process developing the Patient Scheduler. The workplace studies and preliminary data analysis were based on qualitative methods such as qualitative interviews; participative observations of daily work at the ward, meetings and conferences (cf. Patton, 1990); and studies of different documents, records and other tools (Jordan, 1996). Field studies were made in 5 hospitals. Two of these hospitals were incorporated in the participatory design process of the Patient Scheduler applying methods such as future workshops (Kensing and Madsen, 1991), cooperative prototyping session (Bødker and Grønbæk, 1991) and organizational prototyping (Bardram, 1996). 4. Design documentation Now let us turn to a description of the different design documents used to sustain the experiences obtained during all these activities. Figure 3 contains a summary of the documents. Please note that when using the word “document” we do not solely mean written documents; documentation in the form of photographs, video, rich pictures, process flow-charts, and photocopies of different paper-based forms, documents, work protocols, etc. were central parts of the documentation. The documentation can be divided into two broad categories: (i) organizational overviews, providing a description of the organizational context of the collaborative work-processes; and (ii) work activity scenarios, which are scenarios trying to capture the collaborative work, which we are designing for. 4.1. Organizational overviews An insight into the organization where the future computer system has to be implemented and where the system development process has to take place is clearly indispensable. Thus several authors stress the need for “getting to know the domain, people and tasks” (Johnson et al., 1995 p. 214) and the need for general “work situation overviews” (Kyng, 1995). These kinds of descriptions are essential because a specific work-task scenario is only given meaning from the situation in which it is used. In our project we used four different kind of representation of the hospitals as organizations: Organizational overview (OO). The OO is intended to provide a sufficiently detailed description of the tasks, goals, purposes, and strategies of the organization, the types of jobs and the roles within the organization, how the employee are organized (structure), and the different kind of technology used there. The environment of the organization in terms of competitors, society, labor unions, etc. is part of such an organizational overview as well as necessary descriptions on cultural systems of status, prestige, etc. Person-oriented record (POR). The POR is intended to capture the work practices of a person – both a particular person and a generic job description: the sequence of actions and tasks in the daily round, who they collaborate with, their responsibilities and job descriptions, what they perceived as routine and exceptional work, how they handle exceptions and problems, and so on. Object-oriented record (OOR). 
The OOR is intended to describe the construction and career of an object, artifact, or document through the system: how the object is created, what it consists of, what locations it "visits", who owns it, what other objects it depends on and is in contact with, who has the right to manipulate it, change it, remove it, and so on. Setting-oriented record (SOR). The SOR chronicles what happens in a particular location through time – throughout a shift, a day, a week, or another temporal cycle relevant to the workplace in question. Many kinds of work activities are spatially distributed and the SOR is intended to capture the work taking place in these separate locations.
4.2. Work activity scenarios
In a system development project the organizational overviews are typically made once and for all. In contrast, the Work Activity Scenarios (WAS) are alive during the whole system development process. They are constantly modified and rewritten according to new understandings of the work practice and according to the evolving design of the computer system. Hence, we maintain two sets of activity scenarios: one set of scenarios of current work activities and one set of scenarios of the envisioned future work activities. This might sound like a lot, but often the introduction of a computer system might not change much in the overall activity system, and if it does, such changes have to be considered and described anyway.
WASs are scenarios that detail the activities necessary to get a particular task or process within the total scope of work done. The WAS has a unique name in order to facilitate communication among designers, users, stakeholders, etc. A WAS describes the recurring, regular features of typical tasks and how they relate to the organizational context and to the physical setting, facilities and persons at the workplace. The scenarios are non-technical and encompass both individual as well as collaborative work tasks. They serve the purpose of requirements analysis, provide an environment for the overall design decisions, and form the basis for all further scenarios. The future WASs are also used for implementation and training. A WAS is produced by workplace studies and participatory design techniques. Figure 1 shows a checklist of aspects of collaborative work that a collaborative work scenario should address. This checklist has been compiled from Activity Theory as a framework for design of CSCW systems (see Bardram, 1998), and the insights from our workplace studies and from numerous other workplace studies done within CSCW (for an overview of some of the findings see Plowman et al. (1995) and Grinter (1997)).
A detailed analysis of one collaborative work activity entails asking:
- what is done and why – analyzing the product and purpose of the activity from the point of view of the organization and the point of view of the different groups of involved actors
- what sub-actions are part of the activity – analyzing the detailed flow of distributed tasks of all participants
- how are these sub-actions realized using different tools and artifacts – analyzing the role of artifacts and their mediating and coordinating role, and how these artifacts interrelate
- who is responsible for these sub-actions – analyzing the division of work
- where and when are these sub-actions done – analyzing the spatial and temporal arrangement of the activity, crossing departmental, organizational or geographical boundaries to encompass all the actors involved
A detailed analysis of several interdependent collaborative activities involves asking:
- how is the routine flow of work continuously coordinated in terms of the three basic types of coordination: (i) communicative coordination, where the actors coordinate through signs and language, (ii) instrumental coordination, where each actor coordinates his activity according to the activities of others, and (iii) scripted coordination, where each actor coordinates his activity according to a script for action, e.g., a checklist, a plan or a schedule.
- what kinds of collaborative breakdowns happen in the daily flow of work, looking at:
  - when and how routinely coordinated work collapses because of necessary accommodation to unforeseen constraints in the working situation, and how the coordinated flow of work is reestablished through a mutual cooperative effort.
  - when and how cooperative work breaks down because of conflicting motives and goals, and how this situation is handled through rethinking and co-construction of the activity system.
- how are the different activities interdependent in terms of the three general types of interdependencies: the need for (i) simultaneous activities, (ii) sequential activities, and (iii) shared resources in activities.
- what are the contradictions and conflicts within and between existing work activities and between existing work activities and potential new ones supported by the computer technology.
*Figure 1. A checklist for creating collaborative scenarios.*
Some work activities are central to the (re-)design of a computer system and hence need to be analyzed in greater detail. For this purpose Analytical Scenarios (AS) can be made. An example of an analytical scenario is illustrated in figure 2. The analytical scenario describes in detail what is happening, where and when, by whom and why, and how, both today and potentially in the future as supported by a computer system. Relevant information from the organizational overviews is included (e.g. the description of the responsibility of the head radiologist) and the underlining marks references to other descriptions (e.g. the SOR describing the offices). The last column (the "How – Patient Scheduler") is added later when the design is evolving and is illustrated partly by prototypes or mock-ups. Even if the design obliterates some subtasks, these are kept in the analytical scenario as a reminder of how the future system will change and potentially enhance work. The sentences in italics are comments for further action in the design of the computer system – e.g. there might be a serious problem in not supporting the central task of prioritizing incoming requisitions. In this case we have a contradiction between the current and the future scenario as supported with the current design of the prototype.
4.3. Activity maps
As a way of providing an overview of all interdependent and contradicting activities, activity maps were drawn. These maps were simply graphs with activities as nodes and relations, in terms of interdependencies and/or contradictions, as arcs. Activity maps were drawn both of the current work situation and of the envisioned future work situation.
5. Applying scenarios in the SAIK software design process
In the SAIK project an evolving set of scenarios constituted a backbone, tying together the many activities in the system development lifecycle.
This is in contrast to authors advocating the use of scenarios only for workplace descriptions and initial requirement specification (Anderson and Durney, 1992; Hsia et al., 1994) or for evaluation (Nielsen, 1995). Central to our approach is to use scenarios for describing existing work situations and then use these to help generate a system solution and for continuous verification of the design through experimentation. In the SAIK project collaborative scenarios played three overall purposes: (i) continuous analysis and design documentation, (ii) validation of design solutions and experimentation with prototypes, and (iii) generalization of experiences in order to re-use design insights and solutions in other design projects. 5.1 Scenarios as the fulcrum in the design process In the SAIK project we operated with a design process consisting of three activities: (i) exploration, (ii) design, and (iii) experimentation. We alternated between these activities in an iterative way, trying to use the experience obtained in one activity as an input for the other activities. In the exploration activity the necessary insight into the overall socio-economical and organizational context was initiated. Understanding the Danish hospital sector and its political and economical nature was of central importance to the SAIK project and this exploration was hence never terminated, but continued throughout the whole project. This organizational insight was documented in the organizational overviews. The work activity scenarios of existing ways of doing work were also created in this activity. In the SAIK project central work processes for collaboration and communication at the hospital was described. Because the Patient Scheduler was aiming at supporting the cooperation across departmental boundaries, we wrote scenarios concerning the requisition of radiology examinations, the collaboration among physicians at different conferences, the planning and booking of examinations at the radiology department, etc. Subsequently, analytical scenarios were made for these central work processes (see e.g. figure 2). However, scenarios “at the border” of such central activities for the Patient Scheduler were made as well, e.g. the way medication was given at the ward, and how the physician was using the medical record. The organizational overviews and the work activity scenarios were compiled into the activity maps. These maps were in practice the walls of our offices. Scenarios, pictures, screen-dumps, and description of different artifacts (mostly documents and forms) used at the workplace were put on large bulletin boards, and the connections between all this were maintained by red yarn and post-its notes. At some point concurrent with the exploration activity, the design activity is initiated. This is a creative process of generating ideas for computer support, which is guided by the obtained insight in the exploration activity. Design decisions are facilitated by the different work activity scenarios, which point to issues in the current way of working that need to be considered. For example, the scenario describing the activity of scheduling examinations at the radiology department shown in figure 2 pointed to the need of supporting the sorting of incoming requisitions. This design decision was subsequently evolving into support for setting up some kind of automatic filtering according to sender, type of request, etc. 
For each of the work processes, that we were trying to support, future scenarios were used to document how the computer system might enhance, change, or obliterate existing work activities. These future scenarios are changed, up-dated and used throughout the whole design and construction phase of the computer system. For example, in the SAIK-project it was decided that the Patient Scheduler should support “both ends of the collaboration” – i.e. that it should support both receiving and sending requests for work at other departments. Hence, future scenarios for both the work at wards and at radiology and other service departments were written. Furthermore, a design solution allowing the physician at the ward to book examinations at radiology on his own was made. This would save both the physician and the secretaries at radiology a lot of work. However, this was a radical new solution to the communication between wards and radiology, and several future scenarios were made to envision how this would be possible. 5.2. Design experimentation and confrontation A central part of an iterative design process is to make experiments in order to clarify the overall design of the system and to investigate the qualitative aspect of usability, acceptability, and suitability within the target domain(s). For this purpose we operated with four kind of design confrontations: (i) validation, (ii) logical confrontation, (iii) use confrontation, and (iv) organizational confrontation. Validation is a confrontation between the understanding obtained during the exploration activity as documented in the organizational and work-oriented descriptions and scenarios. In other words, it is a validation of the correctness of the obtained descriptions by discussing the scenarios with the users. This validation is of crucial importance when the scenarios are to be used for further development and design. In the SAIK project this validation was achieved by reviews of documents and video analysis, and in workshops with different employees who has participated in the exploration phase. A logical confrontation happens between a proposed design and the analytical scenarios. The confrontation aims at pinpointing the potential opportunities and risks of the future system according to the way work is done today. The confrontation is called logical because it is a systematic comparison of a proposed design with the knowledge about the work practice of today. Two examples of logical confrontations are illustrated in the analytical scenario in figure 2 (shown in italic). These confrontations reveal problems of connecting the Patient Scheduler to EDIFACT messages coming from outside the hospital, and problems of supporting the prioritizing of requisitions. Thus, these confrontations pointed to potential risks in the overall design. A use confrontation happens between a proposed design as documented by the future scenarios and the future users of the computer system. In the SAIK project we enacted the future scenarios together with the users, using different prototypes illustrating the future system. This confrontation aims at revealing the use-characteristics of the system, potential problems and opportunities for further design. The future scenarios were changed and enriched together with suggested changes to the prototypes. An organizational confrontation happens between a proposed design – either illustrated by a prototype or by the final system – and the organizational context of the new system. 
The aim is to reveal how the computer system supports, enhances and changes the working of the organization as a whole. Thus the system has to be evaluated and confronted with work practices, the organizational structure and culture, related technology, resource constraints, spatial arrangements in the workplace, etc. In the SAIK-project this kind of confrontation was made in different workshops with managerial representatives. In one of these workshops the design solution of having physicians directly book radiology examinations on their own was discussed and found highly problematic from the radiology department's point of view. Several problems with the solution were revealed, ranging from the fact that a physician cannot book all radiology examinations without advice from a radiologist, to more economical issues of how the radiology department can control its expenses if everybody were granted access to book on their own. Hence, there was a need for designing a solution for keeping radiology in control. This involved creating an access mechanism, so that radiology could decide exactly what kinds of examinations could be booked by whom, when, how, etc.
5.3. Analysis patterns – generalizing design knowledge
In the SAIK project, the analytical scenario has two distinct purposes in the system development lifecycle: (a) as a detailed task analysis of work practices of central importance to the design, and (b) as a generalization of experiences from the different hospitals involved in the project. The last purpose must be viewed in light of the overall aim of the SAIK project to design a system that not only should be used at the hospitals involved in the design process, but potentially at all hospitals in Denmark. When deviations were discovered they were kept in the scenario – for example the sentence in figure 2 marked with "ÅKH." is an observation made only at this hospital. The scenarios made during our investigations helped us on the one hand to identify and sustain the differences in work practices across different departments and hospitals. On the other hand the scenarios captured aspects of collaborative work that were stable and similar across different work settings, and they could be generalized into generic scenarios for different types of activities within an hospital. Examples of such generic types of activities are the paper-based requisition of radiology examinations and the scheduling and sorting of incoming requisitions. These generic scenarios provided the background for extracting the general design knowledge embedded in the Patient Scheduler as Analysis Patterns, which could be reused in other projects at Kommunedata. In contrast to Design Patterns (Gamma et al. 1995), an analysis pattern is a solution to a recurrent problem within an organizational context, not within the construction of software. An analysis pattern is an object-oriented solution that represents a common construction in some business modeling – in this case within hospitals as an organization (see also Fowler, 1997). The real-world problem that each analysis pattern is attempting to solve is represented as a generic scenario, which works as an inspiration for the analyst in the future.
6. Conclusions
The collaborative scenarios, as discussed in this paper, are summarized in figure 3. In conclusion, using collaborative scenarios as the backbone in the design of a cooperative system in the SAIK-project has been very successful.
They provided a necessary tool for analyzing and documenting existing work practices and hence paved the way for generating ideas for new or redesigned computer support for these work practices. But most important, collaborative scenarios worked as important thinking tools for grounding the creative envisioning of how work could be organized using new computer technology. As such imaginary thinking tools helping to answer the question of “given this design proposal, what might be the future use of the system?”; collaborative scenarios were a fundamental cornerstone in the participatory design sessions with the users (see Bardram, 1996). Moreover, scenarios are not “dead” documentation, but are alive throughout the whole design process and provides the basis for later construction of software and the final implementation of computer systems within organizations. In this way, scenarios can mediate an implementation and diffusion process of computer technology within an organization, by translating existing work practices into new ones using the system. Note that an implementation phase influences the creation of the organizational and work-oriented overviews (see figure 3). This is a result of the spiral model where experiences obtained during the phase of implementation – as the process of turning a computer system into technology for an organization – might provide a basis for further redesign or implementation of the system, either in the same organization or in similar organizations (e.g. another hospital). Another, more theoretical, conclusion to be drawn from our use of scenarios is that they provide one solution to bridge “the great divide” within CSCW. This term labels the division between CS (Computer Support) and CW (Cooperative Work), the former focusing on technical innovations, and the latter on social aspects of work. The problem with this division is that neither of the sides have been focusing on the process of getting from the one to the other – i.e. have not been addressing neither the issue of designing computer systems based on understanding cooperative work, nor the issue of implementing computer systems within cooperative work practices. Within the numerous workplace studies made within CSCW it is often argued that one of the main strengths of an ethnographic approach is that detailed analyses of social work can provide rich material on which to base recommendations for the design or re-design of a computer system. However, there is a big distance from having a good understanding of existing work practices to creating design solution for a future computer system, which is intended to change these work practices. Typical design recommendations from such ethnographic workplace studies is enclosed as the classic “implications for design” section at the end of the paper (c.f. also Plowman et al., 1995). In the SAIK-project collaborative scenarios proved to be a good way of both documenting experiences obtained during the workplace studies and at the same time they worked as design tools, helping to bridge the distance between present and future work practices. Moreover, as already emphasized, scenarios are “live documents” that are used active in cooperation with users. In this way, scenarios support a two-way communication between designers and users, where users inform designers about current work practices and designers inform users of potential future computer solutions. Hence, design and implementation is facilitated concurrently. 
Such a two-way communication process is fundamentally distinct from the classical use of workplace studies within CSCW, where the ethnographers are the ones in contact with the real work setting, and they inform the design process through "debriefing meetings" (Hughes et al., 1994).
Acknowledgments
The work done in the SAIK-project is funded by the Danish Academy of Technical Sciences, Kommunedata, and the University Hospital for Århus Amt. Part of the workplace studies, the scenarios, and the workshops were made in cooperation with Trine Grundahl, Jens Due Olsen, and Marian Thygesen. We are grateful to all personnel at the 5 hospitals for their willingness to have us hang around asking obvious questions and for their participation in the design process.
Notes
1. SAIK is a Danish abbreviation for "Collaborative Informatics in Clinical Practice."
References
{"Source-Url": "http://www.daimi.au.dk/~bardram/docs/ScenarioBasedDesign_Bardram.pdf", "len_cl100k_base": 6729, "olmocr-version": "0.1.53", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 28399, "total-output-tokens": 8765, "length": "2e12", "weborganizer": {"__label__adult": 0.0007672309875488281, "__label__art_design": 0.007598876953125, "__label__crime_law": 0.0007276535034179688, "__label__education_jobs": 0.0247955322265625, "__label__entertainment": 0.00015723705291748047, "__label__fashion_beauty": 0.0004303455352783203, "__label__finance_business": 0.00104522705078125, "__label__food_dining": 0.0007886886596679688, "__label__games": 0.0009851455688476562, "__label__hardware": 0.00331878662109375, "__label__health": 0.006549835205078125, "__label__history": 0.000957012176513672, "__label__home_hobbies": 0.000255584716796875, "__label__industrial": 0.0012264251708984375, "__label__literature": 0.00103759765625, "__label__politics": 0.0003609657287597656, "__label__religion": 0.0011110305786132812, "__label__science_tech": 0.264404296875, "__label__social_life": 0.00024068355560302737, "__label__software": 0.024169921875, "__label__software_dev": 0.6572265625, "__label__sports_fitness": 0.0004627704620361328, "__label__transportation": 0.0008907318115234375, "__label__travel": 0.00036454200744628906}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 40283, 0.02207]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 40283, 0.67035]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 40283, 0.93867]], "google_gemma-3-12b-it_contains_pii": [[0, 2635, false], [2635, 6231, null], [6231, 9329, null], [9329, 12279, null], [12279, 15175, null], [15175, 18565, null], [18565, 21504, null], [21504, 25100, null], [25100, 25894, null], [25894, 29340, null], [29340, 32765, null], [32765, 33217, null], [33217, 36006, null], [36006, 40283, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2635, true], [2635, 6231, null], [6231, 9329, null], [9329, 12279, null], [12279, 15175, null], [15175, 18565, null], [18565, 21504, null], [21504, 25100, null], [25100, 25894, null], [25894, 29340, null], [29340, 32765, null], [32765, 33217, null], [33217, 36006, null], [36006, 40283, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 40283, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 40283, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 40283, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 40283, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 40283, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 40283, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 40283, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 40283, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 40283, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 40283, null]], "pdf_page_numbers": [[0, 2635, 1], [2635, 6231, 2], [6231, 9329, 3], [9329, 12279, 4], [12279, 15175, 5], [15175, 18565, 6], [18565, 21504, 7], [21504, 25100, 8], [25100, 25894, 9], [25894, 29340, 10], [29340, 32765, 11], [32765, 33217, 12], [33217, 36006, 13], [36006, 40283, 14]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 40283, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
a9a00b795ff8398d990add2044511ccf85e7438e
Self-Healing in Web Services Using Genetic Algorithm
Faezeh Yousefianarama a, Eslam Nazemi b*
a Department of Computer Engineering, Qazvin Branch, Islamic Azad University, Qazvin, Iran
b Faculty of Science & Computer Engineering, Shahid Beheshti University, Tehran, Iran
* Corresponding author email address: nazemi@sbu.ac.ir
Abstract
In addition to the monitoring, analysis, plan, and execution phases of the self-healing cycle, a knowledge base holds the knowledge consumed and produced by all four previously mentioned tasks. In the proposed approach, the required knowledge is prepared for the healing operation by using a genetic algorithm. The healing operation takes place when the response time of a web service exceeds its threshold. In this case, a healing sequence is created by the genetic algorithm to save response time and even to reach an optimum state. The healing sequence causes the transition of service-oriented systems from a degraded state to a healthy state as well as healing the error in the web service, and in this case the lost time is recovered. To make the healing sequence, healing approaches such as substitution, replication, and skip are used, which not only prevent the process from entering a non-responsive state but also optimize response time. To provide the healing sequence, the execution of the proposed plan takes the subsequent web services into account and is able to reduce response time and show the saved time.
Keywords: Self-healing, Web service, Response time, QoS
1. Introduction
A self-healing system should recover from the abnormal or unhealthy state and return to the normative healthy state, and function as it was prior to disruption (Harald and Schahram, 2010). The specification to which a system has been built is usually not fully known to those who maintain it. It is difficult to draw a discrete difference between healthy and unhealthy states of a system as the transition between the two states is not abrupt. What generally obtains is a gradual transition from one state to another (Debanjan et al., 2006). Also, because web services are dynamic and unpredictable, one of their big challenges, quality of service, receives particular attention. Response time is one of the quality-of-service parameters, and its increase is inversely related to system performance. During the degraded transition, response time increases continuously until the service is no longer able to respond. Several approaches have been developed to achieve self-healing service-oriented systems. Some approaches implement self-healing after the detection of failures. When an error occurs, they stop currently running services and repair or replace the malfunctioning ones. A major issue with these reactive approaches is that they cause long disruptions of currently running service systems, which in most cases will incur high revenue cost or risk losing a large number of customers (Hongbing et al., 2009). In the proposed approach, to diagnose the probable states of the system, such as error and failure, an index is considered by which we can examine the quality of response time and create a healing sequence that allows the web services to continue their operation using the proposed healing policies. In Section 2, the related work is presented. In Section 3, we present the architecture of the proposed approach, named the Healing Sequence Creation Algorithm (HSCA). After that, a brief description of the implementation is given, and in Section 5 the results of executing sample web services are presented.
In the last section, we discuss the results of the Insurance Registration Example as a case study.
2. Related works
In (Yu et al., 2009), the self-healing approach is an integration of backing up in the selection and reselecting in the execution. In (Poonguzhali et al., 2011), a self-healing approach is proposed that substitutes an alternative web service providing the same service as the failed one, considering the interrelationship between the component web services. In (Aziz et al., 2012), the authors proposed a QoS-driven transactional service reselection model for reliable replacement. In (Ying et al., 2010), the authors proposed a T-QoS service selection model and a self-healing replacement model, and designed a related simulated environment. In (May and Judith, 2009), it is argued that having a full understanding of the interaction between the different modules in a self-healing cycle provides the designer of a composition with the knowledge necessary to build more effective self-healing systems with minimum runtime overhead. Table 1 shows the self-healing approach taken in each work. Most of the QoS-based self-healing approaches for web services developed so far substitute a replica of the original service. If the replica is not available, web services are reselected for execution without considering the interrelationship between the web services. Our approach creates a healing sequence for the web services. Table 1 shows a comparative study of the existing and proposed works.
<table>
<thead> <tr> <th>Self-healing approach</th> <th>Detection phase</th> <th>Self-healing Phases</th> <th>Recovery phase</th> </tr> </thead>
<tbody>
<tr> <td>Dai Y, Yang L, Zhang B, (Yu et al., 2009)</td> <td>Monitor the QoS-related context</td> <td>QoS of certain service is predicted to be a large deviation</td> <td>Integration of backing up in the execution</td> </tr>
<tr> <td>S.Poonguzhali et al, (Poonguzhali et al., 2011)</td> <td>QoS monitoring based on BPEL activity</td> <td>Based average value of QoS parameters</td> <td>Provides alternate service with the consideration of partner links</td> </tr>
<tr> <td>Aziz Nasridinov, Jeong-Yong Byun, Young-Ho Park, (Aziz et al., 2012)</td> <td>Monitoring to extract information about the system health</td> <td>Identify the QoS degradation</td> <td>Reselect failed service</td> </tr>
<tr> <td>Ying Yin, Bin Zhang, Xiize Zhang, (Ying et al., 2010)</td> <td>Monitors the quality of component services</td> <td>Violating current status of the execution from the execution plan</td> <td>Replacement module</td> </tr>
<tr> <td>Proposed approach</td> <td>Monitoring on the web services response time</td> <td>Diagnosing errors in web services response time</td> <td>Using Replication, Substitution, Skip</td> </tr>
</tbody>
</table>
3. Architecture of the Proposed Approach
We named the proposed approach the Healing Sequence Creation Algorithm (HSCA). HSCA consists of two phases, namely, a monitoring phase and an execution phase. The importance of the monitoring phase lies in recognizing the response status of the web services. Web services in execution environments under heavy load pass through various states in their response-time quality. Similar requests sent to a web service at the same time sometimes lead to heavy load. In this case, the web service may not be able to respond at the proper time. So, to prevent this state and the probability of not responding, we call HSCA, which is shown in Fig. 1.
![Fig. 1. Architecture of HSCA.](Image)
3.2 Monitoring Phase
As shown in Fig. 1, the monitoring tools frequently check the status of the web services through their response time. When the response time of a web service exceeds the determined threshold time, a suitable healing approach can be used for the web service that has lost its normal response time. 3.3 Executive Phase During this phase, a drop in the response-time quality of a web service (in other words, a larger-than-expected increase in its response time) can cause an error in the self-healing cycle. When the response time of the system exceeds the specified threshold time, the system enters the error state. Fig. 2 represents the cycle of healthy, error and failure states in the system. ![Fig. 2. Cycle of normal, error and failure states in web service.](Image) As shown in Fig. 2, when the response time of a web service exceeds the threshold time (error state), a solution has to be found, because some factors are slowing the system down and pushing it towards the failure state. In that state, the web service is no longer able to respond, or it responds too late to be acceptable. After recognizing the error state in the response time, HSCA tries to bring the system back from the degraded state to the healthy state. Table 2 shows the status of a web service from the viewpoint of response. <table> <thead> <tr> <th>Index of response quality</th> <th>State</th> </tr> </thead> <tbody> <tr> <td>Responds within the threshold time</td> <td>Healthy</td> </tr> <tr> <td>Exceeded the threshold time</td> <td>Error</td> </tr> <tr> <td>No response</td> <td>Failure</td> </tr> </tbody> </table> To overcome the issues mentioned in Section 2, we propose a self-healing approach that creates a healing sequence whenever a web service is in the error state. Depending on the type of web service operation and the system administrators' opinion, the healing sequence uses the following healing policies: i. Web service substitution: the web service is substituted with a corresponding web service. ii. Web service replication: instead of using one instance of the web service, several parallel instances are activated and the heavy load is divided between them. iii. Skip of web service: the call to a web service is skipped, and the web service located immediately after it in the workflow is called instead. In the substitution policy, the substitute web service must exist in the knowledge base, which introduces some overhead. With the replication policy, several parallel web services perform the processing of the primary web service and compensate for the lost time. The skip policy is applicable only when halting a web service does not damage the execution sequence of the remaining web services. HSCA searches over these healing policies with a genetic algorithm; using the healing policies, the genetic algorithm suggests a healing sequence that tries to save the response time of the web services. Fig. 3 shows that when the response time of a web service exceeds its threshold time, HSCA is called and a healing sequence is created. 4. Implementation To evaluate HSCA, a tool was designed in which the response time of each web service, its threshold time, its substitute web services and the maximum number of instances that can execute in parallel are stored in the knowledge base. In this tool, when the response time of a web service exceeds the threshold time and the web service enters the error state, HSCA is called. In addition to the affected web service, HSCA suggests a healing sequence for the remaining web services. By using the healing sequence, response time is not only preserved but even shortened. A simplified sketch of such a monitoring-and-healing loop is given below. 
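The paper does not publish source code for this tool, so the following is only a minimal sketch, under assumed data structures, of how a threshold-monitoring loop could invoke a genetic algorithm that searches over the three healing policies. The field names, GA parameters and the `expected_time` cost model are illustrative assumptions, not the authors' implementation.

```python
import random

POLICIES = ["substitute", "replicate", "skip"]

def expected_time(policy, svc):
    # Illustrative cost model (assumption): each policy changes the expected response time.
    if policy == "substitute":
        return svc["substitute_time"]
    if policy == "replicate":
        return svc["base_time"] / svc["max_parallel"]
    return 0.0  # "skip": the service is not called at all

def fitness(sequence, services):
    # Total expected response time of the workflow under a candidate healing sequence.
    return sum(expected_time(p, s) for p, s in zip(sequence, services))

def create_healing_sequence(services, generations=50, pop_size=20, mutation=0.1):
    # Genetic search over healing-policy assignments, one policy per remaining service.
    population = [[random.choice(POLICIES) for _ in services] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda seq: fitness(seq, services))
        parents = population[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(services)) if len(services) > 1 else 0
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < mutation:
                child[random.randrange(len(services))] = random.choice(POLICIES)
            children.append(child)
        population = parents + children
    return min(population, key=lambda seq: fitness(seq, services))

def monitor(services):
    # Call the genetic algorithm only when some service exceeds its response-time threshold.
    for svc in services:
        if svc["observed_time"] > svc["threshold"]:
            return create_healing_sequence(services)
    return None
```

In the actual tool, the fitness function would be driven by the measured response times and by the substitute-service and parallel-instance information stored in the knowledge base.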
If the next web services also enter the error state, HSCA is called again and creates a new healing sequence. In this way, no web service is left without a response, and the processing of the sequential web services is not stopped. The proposed algorithm can also be used in a per-case mode ("case using"), in which it is applied only to the web service whose response time has exceeded the threshold time. Fig. 5 gives an example of the sequential execution of nine web services and compares the two ways of calling the proposed algorithm; HSCA obtains a shorter response time than the per-case mode. Fig. 4 shows that when the response time of a web service exceeds its threshold time, HSCA is called and a healing sequence is created. Fig. 5 shows that the success rate of HSCA is always better than that of the per-case mode. 5. Discussion This part verifies the effectiveness of HSCA. 5.1 Evaluation factors There are many factors related to the self-healing of web services. We consider three factors for the purpose of comparison: \[ \text{Low.QoS\%} = \frac{N}{N_T} \times 100 = \frac{3}{9} \times 100 = 33 \] where \(N\) is the number of web services that have exceeded their threshold time (the number of calls to the genetic algorithm) and \(N_T\) is the total number of web services. \[ \text{High.QoS\%} = \frac{N}{N_T} \times 100 = \frac{6}{9} \times 100 = 66 \] where \(N\) is the number of web services that have been processed within their response time and \(N_T\) is the total number of web services. \[ \text{Availability\%} = 1 - \frac{\text{overload time}}{\text{total consumed time}} = 1 - \frac{0.38032}{0.901532} = 0.993127 \] The overload time is caused by calling HSCA; it represents the time during which no web service is available and no processing operation is done. 5.2 Comparison As shown in Table 3, response time was reduced by about 0.34 of its original value. The table compares the consumed time without the self-healing approach, with self-healing using the per-case mode of the proposed algorithm, and with self-healing using HSCA. Table 3 Comparison of consumed times. <table> <thead> <tr> <th>Situation</th> <th>Consumed time (ms)</th> </tr> </thead> <tbody> <tr> <td>Without self-healing approach</td> <td>291.3978</td> </tr> <tr> <td>Using self-healing with the per-case mode of the proposed algorithm</td> <td>189.588</td> </tr> <tr> <td>Using self-healing with HSCA</td> <td>101.0756</td> </tr> </tbody> </table> Table 4 Comparison of existing and proposed works. <table> <thead> <tr> <th>Self-healing approaches</th> <th>Evaluated features</th> <th>Proposed solution</th> <th>Saved time</th> </tr> </thead> <tbody> <tr> <td>Yu. Dai</td> <td>Response time, cost</td> <td>Integration of backing up in the selection and reselecting in the execution</td> <td>Not specified</td> </tr> <tr> <td>S. Poonguzhali</td> <td>Response time</td> <td>Selecting the substitute web service</td> <td>No</td> </tr> <tr> <td>Proposed solution</td> <td>Response time</td> <td>Mixing the healing policies</td> <td>Yes</td> </tr> </tbody> </table> 6. Case Study The insurance registration example is presented in this section to demonstrate the use of the proposed self-healing cycle in solving the identified problem. Fig. 6 depicts the internal operations of insurance registration. 6.1 Insurance Registration Example The process is initiated by a user logging into the system (Registration Form). The system then performs the necessary authentication on the account and delivers a primary tracking code (Account & Create Code). For the next steps, a tracking code is needed. 
However, the response time of this web service exceeds the threshold time, so HSCA is called and suggests the replication policy. The tracking code is created and the type of insurance is set, for example fire, vehicle, age or travel (Check type of request). Based on the results returned, an identity code for the type of insurance is created (Create identity). Updating the company regulations is also needed, but this web service suddenly runs into problems; therefore HSCA is called again and suggests the replication policy (Update rules). In parallel with these two actions, the results of the request are also checked (Accept rules). At this stage, the user can choose to accept or not accept. ![Fig.6. The insurance registration business process.](image-url) If the user does not accept, an email is sent to the user. If the user accepts the inquiry, an estimation of costs and payments is needed (Calculate insurance cost). This web service is also unable to respond in the specified time; therefore HSCA is called and uses the replication policy. In the next stage, the user confirms the estimated cost and pays (Accept cost). The insurance identity code is then registered, and the user can view it for later reference (Register Insurance Identification). Table 5 Process of insurance request. <table> <thead> <tr> <th>Web Service Name</th> <th>Healing Policies</th> <th>Output</th> <th>Execution time (ms)</th> </tr> </thead> <tbody> <tr> <td>Registration Form</td> <td>Nothing</td> <td>Create Code</td> <td>26.7564</td> </tr> <tr> <td>Account &amp; Create Code</td> <td>Substitution</td> <td>Check type of request</td> <td>16.0547</td> </tr> <tr> <td>Check type of request</td> <td>Substitution</td> <td>Create identity</td> <td>16.4271</td> </tr> <tr> <td>Create identity</td> <td>Replication</td> <td>Update rules</td> <td>5.4333</td> </tr> <tr> <td>Update rules</td> <td>Replication</td> <td>Accept rules</td> <td>4.7577</td> </tr> <tr> <td>Accept rules</td> <td>Nothing</td> <td>Calculate insurance cost</td> <td>23.9283</td> </tr> <tr> <td>Calculate insurance cost</td> <td>Replication</td> <td>Accept cost</td> <td>4.9282</td> </tr> <tr> <td>Accept cost</td> <td>Replication</td> <td>Register Insurance Identification</td> <td>3.492</td> </tr> <tr> <td>Register Insurance Identification</td> <td>Replication</td> <td>Finish</td> <td>4.4214</td> </tr> </tbody> </table> 6.2 Possible Violation Points Because a web service lives in a dynamic environment, any unexpected change to a service could potentially lead to a fault. In the Insurance Registration Example we can identify three possible violation points, namely Account & Create Code, Update rules and Accept rules. These are listed in Table 5. Solving these violations is demonstrated using the proposed self-healing composition cycle. 7. Conclusion In this paper, degradations of the response-time values are detected, and the repair actions provide high availability. In order to save the response time of web services, we proposed the Healing Sequence Creation Algorithm (HSCA), which uses a genetic algorithm together with the replication, substitution and skip policies. Using this approach, we can recover, at runtime, the response time of web services whose response time has exceeded the threshold. In future work, we will add online planning to HSCA; healing sequences will be evaluated by numerical values and used for comparison and reuse. References Author Biographies **Faezeh Yousefian** was born in Tehran, Iran. She received her BSc degree in software engineering from the Islamic Azad University of Kashan. 
She is an MSc student in Computer Engineering at the Islamic Azad University of Qazvin. **Eslam Nazemi**, PhD, is an Assistant Professor at the Faculty of Science & Computer Engineering, Shahid Beheshti University. He was born in Sarab, Iran, in 1954. He received the BSc degree in Applied Mathematics and Operational Research from the School of Planning and Computer Application, Tehran, Iran, in 1977, MSc degrees in System Engineering and in Economics in 1987 and 1996, and the PhD in Industrial Engineering and Information Technology in 2005, in Iran. He was a faculty member of the School of Planning and Computer Application from 1978 and, from 1986 to the present, he has been with the Electrical and Computer Engineering Faculty and later the Faculty of Science & Computer Engineering at Shahid Beheshti University (SBU), Tehran, Iran. He served as deputy for graduate and education affairs and is now the manager of informatics development of education at SBU. His main fields of research are self-* software engineering, large-scale software development, web mining, and self-adaptive software quality. He has authored and co-authored more than 90 papers in journals and conferences and has written 10 books on software engineering, software quality, game theory, mathematics and project management.
{"Source-Url": "http://www.jscdss.com/index.php/files/article/download/24/pdf_22", "len_cl100k_base": 4333, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 22071, "total-output-tokens": 4681, "length": "2e12", "weborganizer": {"__label__adult": 0.000293731689453125, "__label__art_design": 0.0003314018249511719, "__label__crime_law": 0.00036025047302246094, "__label__education_jobs": 0.0009255409240722656, "__label__entertainment": 7.128715515136719e-05, "__label__fashion_beauty": 0.00015532970428466797, "__label__finance_business": 0.00038909912109375, "__label__food_dining": 0.00032401084899902344, "__label__games": 0.00043582916259765625, "__label__hardware": 0.0007939338684082031, "__label__health": 0.0007529258728027344, "__label__history": 0.00022470951080322263, "__label__home_hobbies": 7.587671279907227e-05, "__label__industrial": 0.0003299713134765625, "__label__literature": 0.00032210350036621094, "__label__politics": 0.00022983551025390625, "__label__religion": 0.0003724098205566406, "__label__science_tech": 0.045318603515625, "__label__social_life": 9.98377799987793e-05, "__label__software": 0.01180267333984375, "__label__software_dev": 0.935546875, "__label__sports_fitness": 0.00023365020751953125, "__label__transportation": 0.00043487548828125, "__label__travel": 0.00018286705017089844}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 20327, 0.02578]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 20327, 0.15778]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 20327, 0.92383]], "google_gemma-3-12b-it_contains_pii": [[0, 4265, false], [4265, 8284, null], [8284, 11090, null], [11090, 14743, null], [14743, 19748, null], [19748, 20327, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4265, true], [4265, 8284, null], [8284, 11090, null], [11090, 14743, null], [14743, 19748, null], [19748, 20327, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 20327, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 20327, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 20327, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 20327, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 20327, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 20327, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 20327, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 20327, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 20327, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 20327, null]], "pdf_page_numbers": [[0, 4265, 1], [4265, 8284, 2], [8284, 11090, 3], [11090, 14743, 4], [14743, 19748, 5], [19748, 20327, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 20327, 0.28696]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
8d61942fd2f19f5065b402557626dc0155a066d6
Article type: Original article. Topic: Development of computer applications. Received: 03/12/2019 | Accepted: 20/12/2019 | Published: 30/03/2020 QuantityEr: An extensible and simple solution to obtain the amount of results of complex queries to GitHub Ernesto Soto Gómez[0000-0001-6521-2221]* 1Universidad de las Ciencias Informáticas. Carretera a San Antonio de los Baños, Km. 2 ½. Torrens, La Lisa, La Habana, Cuba. esoto@uci.cu *Corresponding author: esoto@uci.cu Abstract GitHub is a platform that provides hosting for software development version control using Git. It features an application programming interface that allows software to interact with the platform. The enormous quantity of information hosted on GitHub may be useful for studies of the current presence of development tools in the open-source software development community. However, the search engine has restrictions that make it impossible to issue complex queries to the platform. In this report, an object-oriented and extensible solution named QuantityEr is described for obtaining the number of search results of complex queries to GitHub using the inclusion-exclusion principle. The mathematical definitions, as well as related concepts, are presented, and the mathematical model is discussed. The general design of the application and the development tools used are presented, and the results of execution examples are shown. It is concluded that the treated problem has been solved, although more work can be done to improve the solution. Keywords: search results amount, GitHub, inclusion-exclusion principle, object-oriented programming, Python Introduction GitHub\(^1\) is a platform that provides hosting for software development version control using Git\(^2\). It provides several collaboration features such as bug tracking, feature requests, task management, and wikis for every project. It also features an application programming interface (API) that allows software to interact with the platform\(^3\) [1]. 
Through this API, a search engine can be accessed. The search engine allows users to search almost every aspect of the platform, across projects, source code and other areas and features\(^4\) [2]. A web page that serves as an interface to the search API is also available\(^5\). As of August 2019, GitHub reports having over 40 million users and more than 100 million repositories\(^6\). This enormous quantity of information may be useful, among other things, to obtain the number of projects, source code files, issues, etc., that mention a set of technologies, tools or development libraries, in order to study the current presence of these tools in the open-source software development community. Other kinds of quantitative studies may be done as well [3]; examples of such research are [4–7]. However, the search engine has some restrictions\(^4\) that make it impossible to issue complex queries to the platform. According to the GitHub Developer Guide\(^4\), the restrictions are the following: - The Search API does not support queries that: - are longer than 256 characters (not including operators or qualifiers); - have more than five AND, OR, or NOT operators. - Authenticated clients can make up to 30 requests per minute; for unauthenticated requests, the rate limit allows up to 10 requests per minute. Furthermore, if the search is over source code files, special restrictions apply\(^7\). \(^1\)https://github.com/ \(^2\)https://git-scm.com/ \(^3\)https://developer.github.com/v3 \(^4\)https://developer.github.com/v3/search/ \(^5\)https://github.com/search \(^6\)https://github.com/about \(^7\)https://developer.github.com/v3/search/#search-code A system named GHTorrent has already been developed to ease interaction with the large quantity of information hosted on GitHub\(^8\) \([8]\). This solution is mainly conceived to mirror the data hosted on GitHub in order to facilitate parallel access and studies on snapshots of the data, but it does not provide an alternative for making complex queries to GitHub. In fact, this system has its own restrictions on the quantity of data that can be accessed at any time\(^9\) \(^10\). Also, the system only provides snapshots for a reduced set of projects\(^11\) \(^12\). Moreover, its design is centered only on interaction with the repositories of GitHub; this means, for example, that searching source code is not possible. Furthermore, the objective of the system is to interact with GitHub, which means that future interaction with other platforms is not currently conceived. A different kind of alternative is GH Archive\(^13\), which records events from GitHub\(^14\). The recorded data can be accessed through BigQuery\(^15\), which allows any kind of SQL-like query. GH Archive, although a powerful and flexible solution, is not an alternative for exploring the data stored in GitHub, but rather a tool for exploring the data that represents the interaction with GitHub. This means that, for example, searching inside public source code cannot be done with GH Archive. Moreover, both of these systems are server-side development tools and not client applications ready to use for making queries. In the context of this article, complex queries are those that have many logical connectives and sub-expressions, for example \(A \text{ OR } (C \text{ AND } (D \text{ OR } E))\), especially those that exceed the allowed number of logical operators. (A minimal sketch of how a single, simple query can be issued to the Search API is shown below.) 
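As a point of reference (not part of the original paper), a single conjunctive query can be issued to the public GitHub REST Search API and its result count read from the `total_count` field of the response. The sketch below assumes the `requests` package and a personal access token supplied by the caller; the helper name `result_count` is ours.

```python
import requests

def result_count(query: str, token: str = None) -> int:
    """Return the number of results GitHub reports for one simple (conjunctive) query."""
    headers = {"Accept": "application/vnd.github.v3+json"}
    if token:  # code search requires authentication; other endpoints allow anonymous use
        headers["Authorization"] = f"token {token}"
    response = requests.get(
        "https://api.github.com/search/code",
        params={"q": query, "per_page": 1},  # only the count is needed, not the items
        headers=headers,
    )
    response.raise_for_status()
    return response.json()["total_count"]

# Example (token elided): result_count("import asyncio language:python", token="...")
```

The rate limits quoted above still apply, so a client issuing many sub-queries has to throttle its requests.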
Obtaining the number of results of queries of this kind would make it possible to analyze the current presence of technologies. Although many reporting tools have been developed, none of them is capable of obtaining the result count of complex queries directly from GitHub. Some of these tools are listed at https://www.gharchive.org/. Another example, not listed at the previous URL, is https://www.programcreek.com/; in that case the reports are only for statically selected libraries from statically selected languages. In this report, a simple solution named QuantityEr\(^{16}\) is described for obtaining the number of search results of complex queries directly from GitHub. The proposed design was conceived with extension in mind, in such a way that it would be possible to incorporate the ability to interact with other similar platforms besides GitHub, as well as other query languages and algorithms for obtaining the number of search results. --- \(^8\)http://ghtorrent.org/ \(^9\)http://ghtorrent.org/raw.html \(^10\)http://ghtorrent.org/mysql.html \(^11\)http://ghtorrent.org/mongo.html \(^12\)http://ghtorrent.org/relational.html \(^13\)https://www.gharchive.org/ \(^14\)https://developer.github.com/v3/activity/events/types/ \(^15\)https://developers.google.com/apps-script/advanced/bigquery \(^16\)Source code accessible from https://github.com/EStog/QuantityEr/tree/v0.1 The remainder of the document is structured as follows: the next section presents the mathematical definitions and concepts necessary to understand the proposed solution; the section after that describes the proposed solution as well as some usage examples; the last section gives the final remarks and concludes. **Mathematical background** In order to understand the proposed solution, some mathematical background is necessary. To keep the report self-contained, this section summarizes the principal mathematical concepts used in the design of the solution. The following definitions (or equivalent ones), as well as other complementary concepts and proofs, can be found in the cited references [9–17]. The following notations are used in this report. - \( \wp(A) \) denotes the power set of a set \( A \), that is, the set of all subsets of \( A \). - \( |A| \) denotes the cardinality of a set \( A \), that is, the number of elements in \( A \). - \( \emptyset \) denotes the empty set. **Boolean algebras** The first concept essential to the design of the proposed solution is that of a Boolean algebra. **Definition 1.** A Boolean algebra is a tuple \((S, +, \cdot, ', \perp, \top)\) where \( S \) is a set containing distinct elements \( \perp \) and \( \top \), \( + \) and \( \cdot \) are binary operators on \( S \), and \( ' \) is a unary operator on \( S \). Every Boolean algebra satisfies the following laws for all \( x, y, z \in S \). - **Commutative laws:** \( x + y = y + x \quad x \cdot y = y \cdot x \) - **Distributive laws:** \( x \cdot (y + z) = (x \cdot y) + (x \cdot z) \quad x + (y \cdot z) = (x + y) \cdot (x + z) \) - **Identity laws:** \( x + \perp = x \quad x \cdot \top = x \) - **Complement laws:** \( x + x' = \top \quad x \cdot x' = \perp \) Associative and idempotent laws, as well as other laws, can also be considered, since they follow from the defining laws. Furthermore, other useful operators can be derived from the previous ones [12,14,16]. (A quick mechanical check of these laws in the two-valued case is sketched below.) 
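As an aside not present in the original paper, the laws of Definition 1 can be checked mechanically for the two-valued Boolean algebra introduced in Fact 2 below, for instance with SymPy's logic module: a law holds exactly when its negation is unsatisfiable.

```python
from sympy import symbols
from sympy.logic.boolalg import And, Or, Not, Equivalent
from sympy.logic.inference import satisfiable

x, y, z = symbols("x y z")

laws = {
    "commutativity of +": Equivalent(Or(x, y), Or(y, x)),
    "distributivity of .": Equivalent(And(x, Or(y, z)), Or(And(x, y), And(x, z))),
    "complement law of +": Equivalent(Or(x, Not(x)), True),
    "associativity of .": Equivalent(And(x, And(y, z)), And(And(x, y), z)),
    "idempotence of +": Equivalent(Or(x, x), x),
}

for name, law in laws.items():
    # the law is a tautology iff its negation has no satisfying assignment
    print(name, satisfiable(Not(law)) is False)
```

Each line should print `True`, mirroring the fact that these identities hold for every assignment of 0 and 1 to the variables.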
**Fact 1.** In a Boolean algebra \((S, +, \cdot, ', \perp, \top)\) the following laws are satisfied for all \( x, y, z \in S \): Associative laws: \[ x + (y + z) = (x + y) + z \quad \text{and} \quad x \cdot (y \cdot z) = (x \cdot y) \cdot z \] Idempotent laws: \[ x + x = x \quad \text{and} \quad x \cdot x = x \] Boolean algebras model operations over the elements of a set that relate two elements to the maximum (the \(+\) operation) or the minimum (the \(\cdot\) operation) of both elements in a partial order whose minimum and maximum are \( \bot \) and \( \top \), respectively. In other words, a partial order \( \leq \) can be defined over \( S \) where \[ \forall a, b \in S \ (a \leq b \iff a + b = b) \] or, equivalently, \[ \forall a, b \in S \ (a \leq b \iff a \cdot b = a) \] and \[ \forall a \in S \ (\bot \leq a \leq \top) \] [14]. Also, intuitively speaking, every element has an associated complement such that together they form the maximum, while their meet is the minimum, as stated in the complement laws. **Fact 2.** *The tuple \((\{0,1\}, \lor, \land, \neg, 0, 1)\) is a Boolean algebra with the operations of disjunction (\( \lor \)), conjunction (\( \land \)) and negation (\( \neg \)) defined as follows.* <table> <thead> <tr> <th>( \lor )</th> <th>0</th> <th>1</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>0</td> <td>1</td> </tr> <tr> <td>1</td> <td>1</td> <td>1</td> </tr> </tbody> </table> <table> <thead> <tr> <th>( \land )</th> <th>0</th> <th>1</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>1</td> <td>0</td> <td>1</td> </tr> </tbody> </table> <table> <thead> <tr> <th>( x )</th> <th>( \neg x )</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>1</td> </tr> <tr> <td>1</td> <td>0</td> </tr> </tbody> </table> This is the most elementary Boolean algebra and is the one found in classical binary logic, which has applications in several areas of computer science [10,12–14]. **Fact 3.** *The tuple \((\wp(U), \cup, \cap, {}^C, \emptyset, U)\) is a Boolean algebra with the operations of union (\( \cup \)), intersection (\( \cap \)) and complement (\( {}^C \)) defined as follows for all \( X, Y \in \wp(U) \).* \[ X \cup Y = \{ x \mid x \in X \lor x \in Y \} \quad X \cap Y = \{ x \mid x \in X \land x \in Y \} \quad X^C = \{ x \mid x \notin X \} \] This specific Boolean algebra is of great interest in science, since mathematics in general is founded on set theory [11–14]. In this work, the last two Boolean algebras are crucial because the problem at hand is to find the number of objects that make a logical sentence true. In this context, the logical sentence is the query to be issued to the platform. The proposed solution takes advantage of the equivalences between classical logic and set theory, in the context of Boolean algebras, to solve this problem. **Boolean functions** In some contexts, combinations of operations over the set $\{0, 1\}$ are called Boolean functions. The following definitions relate to this subject. **Definition 2** (Boolean function). A Boolean function of degree $n$ is a function $f : \{0, 1\}^n \to \{0, 1\}$ where $f$ is an atom (a single variable or value) or a composition of the operations $\land, \lor$ and $\neg$ of the Boolean algebra $\langle \{0, 1\}, \lor, \land, \neg, 0, 1 \rangle$. This composition is called a Boolean expression, and the variables of the Boolean expression are called Boolean variables. This concept has wide application in the design of logic gate circuits. 
In this area, one of the main problems is the simplification of Boolean expressions [9,12,14,16]. In the case of this work, Boolean expressions are of great importance because, as we will see, each query has an associated Boolean expression. The objective is to simplify it in order to obtain an expression that involves less computation. The simplification of a Boolean expression may be done symbolically, by applying the laws of a Boolean algebra (Definition 1), but also by applying specific methods that simplify an equivalent form of the expression. **Definition 3.** Two Boolean expressions $A(x_1, x_2, \ldots, x_n)$ and $B(x_1, x_2, \ldots, x_n)$ are equivalent if $\forall x_1, x_2, \ldots, x_n \in \{0, 1\} \left( A(x_1, x_2, \ldots, x_n) = B(x_1, x_2, \ldots, x_n) \right)$. **Definition 4.** A normal form of a Boolean expression $f(x_1, x_2, \ldots, x_n)$ is an equivalent Boolean expression of the form $g(x_1, x_2, \ldots, x_n) = t_1 \circ t_2 \circ \ldots \circ t_m$ where each $t_i$ ($1 \leq i \leq m$) is of the form $y_1 \ast y_2 \ast \ldots \ast y_k$ with $k \leq n$, and each $y_j$ ($1 \leq j \leq k$) is of the form $x_l$ or $\neg x_l$ with $1 \leq l \leq n$. When $\circ$ is $\land$ and $\ast$ is $\lor$, the normal form is called conjunctive (CNF). Similarly, when $\circ$ is $\lor$ and $\ast$ is $\land$, the normal form is called disjunctive (DNF). Additionally, when the normal form is conjunctive, each $t_i$ is called a maxterm; similarly, when the normal form is disjunctive, each $t_i$ is called a minterm. The Quine-McCluskey algorithm is one of the methods that use the normal form of a Boolean expression, specifically the DNF, to obtain an equivalent minimal expression. The algorithm, in essence, tests combinations of the minterms in order to find those that are essential to represent the value of the expression. It is known that it does not perform well when the size of the input (in this case, the expression to simplify) is large; in fact, the problem of simplification of Boolean expressions is considered NP-hard [12,14,16]. However, the simplification of a Boolean expression is still of great importance to this work, because small queries are preferable to big ones. **Definition 5.** Let $X_1, X_2, \ldots, X_n$ be given sets. A predicate is a function $P: X_1 \times X_2 \times \ldots \times X_n \rightarrow \{0,1\}$ [10,13]. It is obvious that a predicate has an associated Boolean expression if each atom is replaced by a Boolean variable. **Definition 6.** The expression $S = \{x \mid P(x)\}$ is equivalent to $x \in S \iff P(x)$ [11]. The following theorem will be useful in the modeling of the solution. **Theorem 1.** The following relations are satisfied for any $A = \{x \mid P(x)\}$ and $B = \{x \mid Q(x)\}$: \[(a) \ A \cup B = \{x \mid P(x) \lor Q(x)\}\] \[(b) \ A \cap B = \{x \mid P(x) \land Q(x)\}\] \[(c) \ A^c = \{x \mid \neg P(x)\}\] **Proof 1.** The proof follows directly from Fact 3 and Definition 6. These relations may be easily understood: if $A$ contains all the elements $x$ such that $P(x) = 1$ and $B$ contains all the elements $x$ such that $Q(x) = 1$, then it follows, from Definition 6 and the definition of union in Fact 3, that $A \cup B$ contains the elements $x$ such that $P(x) \lor Q(x) = 1$. The same analysis can be done for the intersection and complement cases. **Inclusion-exclusion principle** First, let us consider the cardinality of the power set; this will be useful later in the description of the proposed solution. Fact 4. 
The cardinality of the power set of $U$ is $$|\wp(U)| = 2^{|U|}$$ [13]. The inclusion-exclusion principle (IEP) is a mathematical formula that gives the cardinality of the union of finite sets in terms of the cardinalities of all possible intersections of the given sets. Fact 5 (Inclusion-exclusion principle). The cardinality of the union of sets $S_1, S_2, \ldots, S_n$ is $$\left| \bigcup_{i=1}^{n} S_i \right| = \sum_{\emptyset \neq J \subseteq \{1, 2, \ldots, n\}} (-1)^{|J|+1} \left| \bigcap_{j \in J} S_j \right|$$ The number of possible intersections of $n$ sets is the same as the number of subsets of a set of $n$ elements, not counting the empty set. This leads to the following fact, taking Fact 4 into account. Fact 6. There are $$2^n - 1$$ terms in the inclusion-exclusion principle formula for $n$ sets. This means that an algorithm that calculates the cardinality of the union of $n$ sets by directly using the IEP has exponential complexity [15,17]. In the proposed solution, the IEP is used to decompose a given query into many smaller sub-queries that are issued to the platform search API. The next section shows how to manage the problem of exponential complexity when using this method. Results and discussion The problem to solve is: how to get the number of results of complex queries to GitHub? The proposed solution follows a divide-and-conquer approach: 1. Simplify and decompose complex queries into smaller, simple sub-queries. 2. Issue the sub-queries to the server and obtain the result count of each one. 3. Combine the results of the sub-queries into a single value that constitutes the result count of the initial complex query. In the next subsection, a mathematical model and formalization of the solution is given. **Mathematical model** Mathematically speaking, the problem to solve is as follows. Let $O$ be the set of all the objects in the platform (projects, source code files, etc.). Let $Q : O \to \{0, 1\}$ be a predicate that represents the query to issue. Then the set $r_Q$ of all objects that match the query $Q$ is $$r_Q = \{ o \mid Q(o) \}$$ The problem to solve is finding $|r_Q|$ when the Boolean expression associated with $Q$ has many compositions and logical connectives. The first step of the proposed solution is to simplify the Boolean expression associated with the query. This may be done by symbolic transformations applying the laws that a Boolean algebra satisfies, or by using the Quine-McCluskey algorithm. It is known that this is not effective when the size of the input is too big. For this reason, the resulting expression (simplified or not) must be decomposed into various sub-expressions; for this purpose, the DNF is used. By applying Theorem 1, it is known that if $Q(o) = Q_1(o) \lor Q_2(o) \lor \ldots \lor Q_n(o)$ is the DNF, then $$r_Q = \{ o \mid Q_1(o) \lor Q_2(o) \lor \ldots \lor Q_n(o) \} = r_{Q_1} \cup r_{Q_2} \cup \ldots \cup r_{Q_n}$$ where $$r_{Q_i} = \{ o \mid Q_i(o) \}$$ for each $1 \leq i \leq n$. Each $Q_i(o)$ is of the form $Q_{i_1}(o) \land Q_{i_2}(o) \land \ldots \land Q_{i_m}(o)$. This kind of query can be issued directly to GitHub because it has no composition and only conjunctive connectives. The conjunctive connectives (AND in the query language of GitHub) can even be stripped from the sub-query, since GitHub automatically interprets a tuple of atoms as a conjunction; in this case, no conjunctive or disjunctive connectives appear in the issued query at all. (A sketch combining this decomposition with the inclusion-exclusion step described below is given after this paragraph.) 
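The following sketch (ours, not taken from the QuantityEr source code) shows how the decomposition just described can be combined with the inclusion-exclusion principle and a result cache, using SymPy to obtain the DNF. The function `count_subquery` stands in for whatever routine actually issues a conjunctive query to the platform, for instance the `result_count` helper sketched earlier; negated atoms are not handled here.

```python
from itertools import combinations
from sympy import symbols
from sympy.logic.boolalg import And, Or, to_dnf

def union_count(query, count_subquery, cache=None):
    """Estimate |r_Q| by inclusion-exclusion over the conjunctive terms of the DNF of `query`."""
    cache = {} if cache is None else cache
    dnf = to_dnf(query, simplify=True)
    terms = list(dnf.args) if isinstance(dnf, Or) else [dnf]

    def atoms_of(term):
        return set(term.args) if isinstance(term, And) else {term}

    def conjunction_count(atoms):
        key = frozenset(atoms)  # idempotence (Fact 1): repeated atoms collapse, enabling caching
        if key not in cache:
            cache[key] = count_subquery(" ".join(sorted(str(a) for a in key)))
        return cache[key]

    total = 0
    for k in range(1, len(terms) + 1):
        for subset in combinations(terms, k):
            atoms = set()
            for term in subset:
                atoms |= atoms_of(term)  # intersection of result sets = conjunction of atoms
            total += (-1) ** (k + 1) * conjunction_count(atoms)
    return total

# Usage sketch: count sources mentioning asyncio, or both multiprocessing and threading.
a, m, t = symbols("asyncio multiprocessing threading")
# union_count(Or(a, And(m, t)), count_subquery=lambda q: result_count(q + " language:python"))
```

This mirrors the roles of the Decomposer, Cache and QueryIssuer classes described in the solution design below.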
Nevertheless, negation is a problem that, for now, cannot be avoided. In that case, a query must be designed with care in order not to exceed the restriction that the GitHub Search API imposes on the number of operators. After the sub-queries have been sent, the next step is to find the result count of the main query by applying the IEP (Fact 5). The problem with this approach is that the number of terms in the IEP formula for \( n \) sets is, according to Fact 6, \( 2^n - 1 \), which is the number of sub-queries to be issued to the server. However, each term of the IEP is an intersection, and the terms of the expression associated with the DNF are also intersections. Then, by applying Fact 1 (idempotence), each term of the IEP formula can be reduced, so that some terms turn out to be repeated. For this reason, it is proposed to use a cache that stores already issued queries together with their respective result counts, in order to reduce the number of issued queries. However, work still needs to be done to accelerate the computation of the terms of the IEP formula. **Solution design** QuantityEr is designed using the object-oriented paradigm. Care for extension has been taken from the beginning by assigning a class to each sub-process of the solution. Figure 1 outlines the class diagram of the most important classes. The classes are given as abstract base classes, so they must be extended for a particular problem; currently, the extensions for solving the problem in the specific case of GitHub are implemented. Each class is briefly described next. **Main:** Coordinates the interaction between the Input, Engine and Output objects; that is, the main algorithm is implemented inside this class. **Input:** Currently, queries can be presented to QuantityEr from two sources: the command line and files. Several queries can be presented to the application in one single execution. The responsibility of this class is to present these sources as a stream to the Parser. Since the input logic is encapsulated in one class, other kinds of input may be added in the future, for example input from the network. **Parser:** Translates the queries presented as input into a standard language that can be managed by the other entities. Since the parsing logic is encapsulated in one class, the syntax of the language used in the input queries does not need to be the one expected by GitHub; this may ease the input by allowing a cleaner syntax. **MiddleCode:** Represents the intermediate language that the other classes understand. All queries inside the application are in this format. **Engine:** Coordinates the interaction between the Decomposer, Cache, Translator and QueryIssuer objects; that is, the algorithm that solves the problem is implemented inside this class. **Decomposer:** Decomposes a complex query into several smaller, simple queries. Currently, the extension using the IEP is implemented. **Cache:** Stores the result counts of already issued queries. Currently, an in-memory cache is available as well as a file-based one. **Translator:** Translates a given simple sub-query into an issuable one. Currently, only GitHub is supported, but more platforms may be added in the future. **QueryIssuer:** Issues a simple sub-query to the platform and obtains the result count, or reports an error if one occurs. ![Class diagram of main classes of QuantityEr](image) Figure 1. Class diagram of the main classes of QuantityEr. 
Execution example results In this section, we consider a usage example in order to study the behavior of the application with complex queries. In this case, the queries ask for the number of source code files that use the classical synchronization mechanisms defined in the asyncio, multiprocessing and threading Python libraries. The results are summarized in Table 1 and Figure 2. The command-line options passed to the program, the actual output and the presented queries, as well as another execution example, can be found in the attached document examples.html\(^\text{17}\). Table 1. Execution example results summary. # means quantity; % means percent. <table> <thead> <tr> <th>No.</th> <th>Query (libraries)</th> <th>Results amount</th> <th>Sub-queries (total)</th> <th>From cache (#)</th> <th>%</th> <th>Issued (#)</th> <th>%</th> </tr> </thead> <tbody> <tr> <td>01</td> <td>asyncio</td> <td>69 053</td> <td>15</td> <td>0</td> <td>0</td> <td>15</td> <td>100</td> </tr> <tr> <td>02</td> <td>multiprocessing</td> <td>159 515</td> <td>31</td> <td>0</td> <td>0</td> <td>31</td> <td>100</td> </tr> <tr> <td>03</td> <td>threading</td> <td>1 451 344</td> <td>31</td> <td>0</td> <td>0</td> <td>31</td> <td>100</td> </tr> <tr> <td>04</td> <td>asyncio ∩ multiprocessing</td> <td>3 095</td> <td>16 383</td> <td>16 353</td> <td>99.82</td> <td>30</td> <td>0.18</td> </tr> <tr> <td>05</td> <td>asyncio ∩ threading</td> <td>29 658</td> <td>16 383</td> <td>16 353</td> <td>99.82</td> <td>30</td> <td>0.18</td> </tr> <tr> <td>06</td> <td>multiprocessing ∩ threading</td> <td>124 327</td> <td>31</td> <td>0</td> <td>0</td> <td>31</td> <td>100</td> </tr> <tr> <td>07</td> <td>asyncio ∩ multiprocessing ∩ threading</td> <td>1 947</td> <td>16 383</td> <td>16 353</td> <td>99.82</td> <td>30</td> <td>0.18</td> </tr> <tr> <td>08</td> <td>asyncio ∪ multiprocessing</td> <td>228 130</td> <td>511</td> <td>435</td> <td>85.13</td> <td>76</td> <td>14.87</td> </tr> <tr> <td>09</td> <td>asyncio ∪ threading</td> <td>1 494 420</td> <td>511</td> <td>435</td> <td>85.13</td> <td>76</td> <td>14.87</td> </tr> <tr> <td>10</td> <td>multiprocessing ∪ threading</td> <td>1 489 850</td> <td>1 023</td> <td>930</td> <td>90.91</td> <td>93</td> <td>9.09</td> </tr> <tr> <td>11</td> <td>asyncio ∪ multiprocessing ∪ threading</td> <td>1 528 155</td> <td>16 383</td> <td>16 185</td> <td>98.79</td> <td>198</td> <td>1.21</td> </tr> </tbody> </table> \(^{17}\)Downloadable from [https://github.com/EStog/QuantityEr/blob/v0.1/running/jupyterlab/examples.html](https://github.com/EStog/QuantityEr/blob/v0.1/running/jupyterlab/examples.html) Table 1 and Figure 2 show that the number of sub-queries depends on the ability of Python's\textsuperscript{18} SymPy\textsuperscript{19} library to simplify the given expression. Also, in this case, the cache produces a great reduction in the number of issued queries, especially when the number of sub-queries is large. Conclusions In this report, a tool named QuantityEr for obtaining the number of results of complex queries to the GitHub search API has been described. The application uses the inclusion-exclusion principle and other mathematical abstractions to decompose the query into several simple sub-queries, and it uses a cache in order to reduce the number of sub-queries issued to the server. Even though the use of the cache is considered to improve the solution and make it viable, more work remains to be done in order to accelerate the computation of the terms of the IEP formula. 
Moreover, the application may be extended to address other restriction problems in GitHub and in other platforms. \textsuperscript{18}https://www.python.org/ \textsuperscript{19}https://docs.python.org/3.7/ \textsuperscript{20}https://www.sympy.org/en/ \textsuperscript{21}https://docs.sympy.org/latest/index.html References Author Roles **Ernesto Soto Gómez**: Conceptualization, Formal analysis, Investigation, Methodology, Software, Validation, Writing (original draft).
{"Source-Url": "https://revistas.ulasalle.edu.pe/innosoft/article/download/14/2/", "len_cl100k_base": 7539, "olmocr-version": "0.1.51", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 40536, "total-output-tokens": 9485, "length": "2e12", "weborganizer": {"__label__adult": 0.0002994537353515625, "__label__art_design": 0.00041556358337402344, "__label__crime_law": 0.0002963542938232422, "__label__education_jobs": 0.0017976760864257812, "__label__entertainment": 7.390975952148438e-05, "__label__fashion_beauty": 0.00013530254364013672, "__label__finance_business": 0.0002799034118652344, "__label__food_dining": 0.00029397010803222656, "__label__games": 0.0005283355712890625, "__label__hardware": 0.000644683837890625, "__label__health": 0.0003964900970458984, "__label__history": 0.0002522468566894531, "__label__home_hobbies": 0.00013720989227294922, "__label__industrial": 0.0004184246063232422, "__label__literature": 0.00034737586975097656, "__label__politics": 0.000186920166015625, "__label__religion": 0.00040078163146972656, "__label__science_tech": 0.0313720703125, "__label__social_life": 0.0001270771026611328, "__label__software": 0.0113372802734375, "__label__software_dev": 0.94970703125, "__label__sports_fitness": 0.0002124309539794922, "__label__transportation": 0.0003669261932373047, "__label__travel": 0.00015819072723388672}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31285, 0.04346]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31285, 0.46477]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31285, 0.81397]], "google_gemma-3-12b-it_contains_pii": [[0, 3172, false], [3172, 5365, null], [5365, 8436, null], [8436, 10544, null], [10544, 12498, null], [12498, 15228, null], [15228, 17212, null], [17212, 18766, null], [18766, 20903, null], [20903, 23495, null], [23495, 24498, null], [24498, 26242, null], [26242, 27477, null], [27477, 29904, null], [29904, 31285, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3172, true], [3172, 5365, null], [5365, 8436, null], [8436, 10544, null], [10544, 12498, null], [12498, 15228, null], [15228, 17212, null], [17212, 18766, null], [18766, 20903, null], [20903, 23495, null], [23495, 24498, null], [24498, 26242, null], [26242, 27477, null], [27477, 29904, null], [29904, 31285, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 31285, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 31285, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31285, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31285, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31285, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31285, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31285, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31285, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31285, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 31285, null]], "pdf_page_numbers": [[0, 3172, 1], [3172, 5365, 2], [5365, 8436, 3], [8436, 10544, 4], [10544, 12498, 5], [12498, 15228, 6], [15228, 17212, 7], [17212, 18766, 8], [18766, 20903, 9], [20903, 23495, 10], [23495, 24498, 11], [24498, 
26242, 12], [26242, 27477, 13], [27477, 29904, 14], [29904, 31285, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31285, 0.11737]]}
olmocr_science_pdfs
2024-12-03
2024-12-03
49e431107c0519de5b2171e3b1d41265248d196c
Abstract This draft describes how LISP control-plane messages can be individually authenticated and authorized without a priori shared-key configuration. Public-key cryptography is used with no new PKI infrastructure required. Status of This Memo This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." This Internet-Draft will expire on March 8, 2019. Copyright Notice Copyright (c) 2018 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust’s Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License. 1. Introduction The LISP architecture and protocols [RFC6830] introduces two new numbering spaces, Endpoint Identifiers (EIDs) and Routing Locators (RLOCs) which provide an architecture to build overlays on top of the underlying Internet. Mapping EIDs to RLOC-sets is accomplished with a Mapping Database System. EIDs and RLOCs come in many forms than just IP addresses, using a general syntax that includes Address Family Identifier (AFI) [RFC1700]. Not only IP addresses, but other addressing information have privacy requirements. Access to private information is granted only to those who are authorized and authenticated. Using asymmetric keying with public key cryptography enforces authentication for entities that read from and write to the mapping system. The proposal described in this document takes advantage of the latest in Elliptic Curve Cryptography. In this proposal the EID is derived from a public key, and the corresponding private key is used to authenticate and authorize Map-Register messages. Thus only the owner of the corresponding private key can create and update mapping entries from the EID. Furthermore, the same approach is used to authenticate Map-Request messages. This in combination with the mapping database containing authorization information for Map-Requests is used to restrict which EIDs can lookup up the RLOCs for another EID. This specification introduces how to use the Distinguished-Name API [AFI] and the [RFC8060] LCAF JSON Type to encode public keys and signatures in the LISP mapping database. The information in the mapping database is used to verify cryptographic signatures in LISP control-plane messages such as the Map-Request and Map-Register. 2. Definition of Terms Crypto-EID: is an IPv6 EID where part of the EID includes a hash value of a public-key. An IPv6 EID is a Crypto-EID when the Map-Server is configured with an Crypto-EID Prefix that matches the IPv6 EID. 
Crypto-EID Hash Length: is the number of low-order bits in a Crypto-EID which make up the hash of a public-key. The hash length is determined by the Map-Server when it is configured with a Crypto-EID Prefix. Crypto-EID Prefix: is a configuration parameter on the Map-Server that indicates which IPv6 EIDs are Crypto-EIDs and what is the Crypto-EID Hash Length for the IPv6 EID. This can be different for different LISP Instance-IDs. Hash-EID: is a distinguished name EID-record stored in the mapping database. The EID format is ‘hash-<pubkey-hash>‘. When a key-pair is generated for an endpoint, the produced private-key does not leave the xTR that will register the Crypto-EID. A hash of the public-key is used to produce a Crypto-EID and a Hash-EID. The Crypto-EID is assigned to the endpoint and the xTR that supports the LISP-site registers the Crypto-EID. Another entity registers the Hash-EID mapping with the public-key as an RLOC-record. Public-Key RLOC: is a JSON string that encodes a public-key as an RLOC-record for a Hash-EID mapping entry. The format of the JSON string is ‘{ "public-key" : "<pubkey>" }’. Control-Plane Signature: a Map-Request or Map-Register sender signs the message with its private key. The format of the signature is a JSON string that includes sender information and the signature value. The JSON string is included in Map-Request and Map-Register messages. Signature-ID: is a Crypto-EID used for a Control-Plane signature to register or request any type of EID. The Signature-ID is included with the JSON-encoded signature in Map-Request and Map-Register messages. Multi-Signatures: multiple signatures are used in LISP when an entity allows and authorized another entity to register an EID. There can be more than one authorizing entities that allow a registering entity to register an EID. The authorizing entities sign their own RLOC-records that are registered and merged into the registering entity’s Hash-EID public-key mapping. And when the registering entity registers the EID, all authorizing entity signatures must be verified by the Map-Server before the EID is accepted. 3. Overview LISP already has several message authentication mechanisms. They can be found in [I-D.ietf-lisp-rfc6833bis], [I-D.ietf-lisp-sec], and [RFC8061]. The mechanisms in this draft are providing a more granular level of authentication as well as a simpler way to manage keys and passwords. A client of the mapping system can be authenticated using public-key cryptography. The client is required to have a private/public key-pair where it uses the private-key to sign Map-Requests and Map-Registers. The server, or the LISP entity, that processes Map-Requests and Map-Registers uses the public-key to verify signatures. The following describes how the mapping system is used to implement the public-key crypto system: 1. An entity registers Hash-EID to Public-Key RLOC mappings. A third-party entity that provides a service can register or the client itself can register. 2. Anyone can lookup the Hash-EID mappings. These mappings are not usually authenticated with the mechanisms in this draft but use the shared configured password mechanisms from [I-D.ietf-lisp-rfc6833bis] that provide group level authentication. 3. When a Crypto-EID, or any EID type, is registered to the mapping system, a signature is included in the Map-Register message. When a non-Crypto-EID is registered a Signature-ID is also included in the Map-Register message. 4. 
The Map-Server processes the registration by constructing the Hash-EID from the registered Crypto-EID, looks up the Hash-EID in the mapping system, obtains the public-key from the RLOC-record and verifies the signature. If Hash-EID lookup fails or the signature verification fails, the Map-Register is not accepted. 5. When a Crypto-EID, or any EID type, is looked up in the mapping system, a signature is included with a Signature-ID in the Map-Request message. 6. The Map-Server processes the request for a Crypto-EID by constructing the Hash-EID from the Signature-ID included in the Map-Request. The signer-ID is a Crypto-EID that accompanies a signature in the Map-Request. The Hash-EID is looked up in the mapping system, obtains the public-key from the RLOC-record and verifies the Map-Request signature. If the Hash-EID lookup fails or the signature verification fails, the Map-Request is not accepted and a Negative Map-Reply is sent back with an action of "auth-failure". 4. Public-Key Hash When a private/public key-pair is created for a node, its IPv6 EID is pre-determined based on the public key generated. Note if the key-pair is compromised or is changed for the node, a new IPv6 EID is assigned for the node. The sha256 [RFC6234] hex digest function is used to compute the hash. The hash is run over the following hex byte string: <iid><prefix><pubkey> Where each field is defined to be: <iid>: is a 4-byte (leading zeroes filled) binary value of the Instance-ID the EID will be registered with in the mapping database. For example, if the instance-id is 171, then the 4-byte value is 0x000000ab. <prefix>: is a variable length IPv6 prefix in binary format (with no colons) and IS quad-nibble zero-filled. The length of the prefix is 128 minus the Crypto-EID hash bit length. For example, if the prefix is 2001:5:3::/48, then the 6 byte value is 0x200100050003. <pubkey>: is a DER [RFC7468] encoded public-key. The public-key hash is used to construct the Crypto-EID and Hash-EID. 5. Hash-EID Mapping Entry A Hash-EID is formatted in an EID-record as a Distinguished-Name AFI as specified in [I-D.farinacci-lisp-name-encoding]. The format of the string is: EID-record: 'hash-<hash-eid>' Where <hash-eid> is a public-key hash as described in Section 4. The RLOC-record to encode and store the public-key is in LCAF JSON Type format of the form: RLOC-record: '{ "public-key" : "<pubkey-base64>" }' Where <pubkey-base64> is a base64 [RFC4648] encoding of the public-key generated for the system that is assigned the Hash-EID. 6. Hash-EID Structure Since the Hash-EID is formatted as a distinguished-name AFI, the format of the <hash-eid> for EID 'hash-<hash-eid>' needs to be specified. The format will be an IPv6 address [RFC3513] where colons are used between quad-nibble characters when the hash bit length is a multiple of 4. And when the hash bit length is not a multiple of 4 but a multiple of 2, a leading 2 character nibble-pair is present. Here are some examples for different hash bit lengths: Crypto-EID: 2001:5::1111:2222:3333:4444, hash length 64: Hash-EID is: 'hash-1111:2222:3333:4444' Crypto-EID: 2001:5::11:22:33:44, hash length 64: Hash-EID is: 'hash-0011:0022:0033:0044' Hash-EID is: 'hash-bbbb:1111:2222:3333:4444' Hash-EID is: 'hash-bb:1111:2222:3333:4444' Hash-EID is: 'hash-bb:1111:0022:0033:4444' Note when leading zeroes exist in a IPv6 encoded quad between colons, the zeros are included in the quad for the Hash-EID string. 
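To make Sections 4 and 6 concrete, the following is a small illustrative sketch (not part of this draft) of computing the public-key hash and rendering the corresponding Hash-EID string in Python. It assumes the DER-encoded key bytes are already available, interprets the concatenation of Section 4 as raw bytes, and handles only hash lengths that are a multiple of 16 bits for brevity.

```python
import hashlib

def pubkey_hash(iid: int, prefix_hex: str, pubkey_der: bytes) -> str:
    """sha256 hex digest over <iid><prefix><pubkey>, per Section 4 (assumption: hashed as raw bytes)."""
    data = iid.to_bytes(4, "big") + bytes.fromhex(prefix_hex) + pubkey_der
    return hashlib.sha256(data).hexdigest()

def hash_eid(digest_hex: str, hash_len: int) -> str:
    """Render the low-order hash_len bits as the 'hash-...' distinguished name of Section 6."""
    assert hash_len % 16 == 0, "simplification: only whole 16-bit quads handled here"
    nibbles = hash_len // 4
    low = digest_hex[-nibbles:]                       # leading zeroes inside a quad are kept
    quads = [low[i:i + 4] for i in range(0, len(low), 4)]
    return "hash-" + ":".join(quads)

# Example with instance-ID 171 and prefix 2001:5:3::/48 (the values used in Section 4):
# digest = pubkey_hash(171, "200100050003", pubkey_der)
# print(hash_eid(digest, 64))    # a string of the form 'hash-xxxx:xxxx:xxxx:xxxx'
```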
The entity that creates the hash, the entity that registers the Crypto-EID, and the Map-Server that uses the hash for Hash-EID lookups MUST agree on the hash bit length.

7. Keys and Signatures

Key generation, message authentication with digital signatures, and signature verification will use the Elliptic Curve Digital Signature Algorithm, or ECDSA [X9.62]. For key generation, curve 'NIST256p' is used and recommended. Signatures are computed over signature data that depends on the type of LISP message sent. See Section 8 and Section 9 for each message type. The signature data is passed through a sha256 hash function before it is signed or verified.

8. Signed Map-Register Encoding

When an ETR registers its Crypto-EID, or any EID type, to the mapping system, it builds a LISP Map-Register message. The mapping includes an EID-record which encodes the Crypto-EID, or any EID type, and an RLOC-set. One of the RLOC-records in the RLOC-set includes the ETR's signature and Signature-ID. The RLOC-record is formatted with an LCAF JSON Type, in the following format:

{ "signature" : "<signature-base64>", "signature-id" : "<signer-id>" }

Where <signature-base64> is a base64 [RFC4648] encoded string over the following ascii [RFC0020] string signature data:

[<iid>]<crypto-eid>

Where <iid> is the decimal value of the instance-ID the Crypto-EID is registering to, and <crypto-eid> is in the form of [RFC3513] where quad-nibbles between colons ARE NOT zero-filled.

The Map-Server that processes an EID-record with a Crypto-EID and an RLOC-record with a signature extracts the public-key hash value from the Crypto-EID to build a Hash-EID. The Map-Server looks up the Hash-EID in the mapping system to obtain the public-key RLOC-record. The Map-Server verifies the signature over the signature data to determine if it should accept the EID-record registration.

9. Signed Map-Request Encoding

When an xTR (an ITR, PITR, or RTR) sends a Map-Request to the mapping system to request the RLOC-set for a Crypto-EID, it signs the Map-Request so it can authenticate itself to the Map-Server the Crypto-EID is registered to. The Map-Request target-EID field will contain the Crypto-EID, and the source-EID field will contain an LCAF JSON Type string with the following signature information:

```
{ "source-eid" : "<seid>", "signature-id" : "<signer-id>", "signature" : "<signature-base64>" }
```

Where <signer-id> is an IPv6 encoded string according to [RFC3513] where quad-nibbles between colons ARE NOT zero-filled. The <seid> is the source EID from the data packet that is invoking the Map-Request, or the entire key/value pair for "source-eid" can be excluded when a data packet did not invoke the Map-Request (i.e., lig or an API request). The <signer-id> is the IPv6 Crypto-EID of the xTR that is providing the Map-Request signature. The signature string <signature-base64> is a base64 [RFC4648] encoded string over the following signature data:

```
<nonce><source-eid><crypto-eid>
```

Where <nonce> is a hex string [RFC0020] of the nonce used in the Map-Request, and the <source-eid> and <crypto-eid> are hex strings [RFC0020] of an IPv6 address in the form of [RFC3513] where quad-nibbles between colons ARE NOT zero-filled. When <seid> is not included in the Map-Request, the string "0::0" is used for <source-eid>.
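The following sketch signs the Section 8 Map-Register signature data with an ECDSA P-256 key and assembles the JSON RLOC-record. It leans on the JDK's "SHA256withECDSA", which hashes and signs in one step; whether that matches byte-for-byte an implementation that pre-hashes the data, and the exact whitespace of the JSON string, are assumptions, and the EID and instance-ID values are illustrative.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import java.security.spec.ECGenParameterSpec;
import java.util.Base64;

public class MapRegisterSignature {
    public static void main(String[] args) throws Exception {
        // Key-pair on the NIST P-256 curve (the draft's recommended 'NIST256p').
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(new ECGenParameterSpec("secp256r1"));
        KeyPair keyPair = kpg.generateKeyPair();

        // Section 8 signature data: "[<iid>]<crypto-eid>", quad-nibbles not zero-filled.
        String signatureData = "[1000]2001:5::1111:2222:3333:4444";

        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(signatureData.getBytes(StandardCharsets.US_ASCII));
        String signatureB64 = Base64.getEncoder().encodeToString(signer.sign());

        // RLOC-record carried in the Map-Register (LCAF JSON Type).
        String rlocRecord = "{ \"signature\" : \"" + signatureB64
                + "\", \"signature-id\" : \"2001:5::1111:2222:3333:4444\" }";
        System.out.println(rlocRecord);
    }
}
```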
10. Signed Map-Notify Encoding

When a Map-Server originates a Map-Notify message, either as an acknowledgment to a Map-Register message, as a solicited [I-D.ietf-lisp-pubsub] notification, or as an unsolicited [RFC8378] notification, the receiver of the Map-Notify can verify the message is from an authenticated Map-Server. An RLOC-record similar to the one used to sign Map-Register messages is used to sign the Map-Notify message:

```
{ "signature" : "<signature-base64>", "signature-id" : "<signer-id>" }
```

Where the "signature-id" is an IPv6 Crypto-EID used by the Map-Server to sign the RLOC-record. The signature data and the encoding format of the signature are the same as for a Map-Register message. See details in Section 8.

A receiver of a Map-Notify message will look up the signature-id in the mapping system to obtain a public-key to verify the signature. The Map-Notify is accepted only if the verification is successful.

11. Other Uses

The mechanisms described within this document can be used to sign other types of LISP messages. How to use these mechanisms to sign LISP encapsulated data packets in a compressed manner, to reduce data packet header overhead, is for further study.

In addition to authenticating other types of LISP messages, other types of EID-records can be encoded as well and are not limited to IPv6 EIDs. It is possible for a LISP xTR to register and request non-IPv6 EIDs but use IPv6 Crypto-EIDs for the sole purpose of signing and verifying EID-records. Examples of other EID types that can be authenticated in Map-Request and Map-Register messages are:

12. EID Authorization

When a Crypto-EID is being used for IPv6 communication, it is implicit that the owner has the right to use the EID since it was generated from the key-pair provisioned for the owner. Other EID types that are not directly associated with signature keys must be validated for use by the mapping system they are registered to. This policy information for the mapping system must be configured in the Map-Servers the EID owner registers to, or a signed authorization must be provided by a third-party entity.

To achieve signed authorization, an entity that allows another entity to register an EID must authorize the registering entity. It does so by adding RLOC-records to the registering entity's Hash-EID public-key mapping. The format of the RLOC-record is a JSON encoded record as follows:

```json
{ "allow-eid" : "<eid>", "signature-id" : "<signer-id>", "signature" : "<signature-base64>" }
```

The format of the <signer-id> and <signature-base64> values are the same as described in Section 8. The <eid> value is in the same string format as the signature data described in Section 8. For other non-IPv6 EID types, the conventions in [RFC8060] are used. In all cases, the string encoding format of the instance-ID '<iid>' is prepended to the EID string.

This entry is added to the RLOC-set of the registering entity's Hash-EID 'hash-<hash>' registration. The authorizing entity signs the Map-Register and sends it with merge-semantics. The Map-Server accepts the registration after the signature is verified and merges the RLOC-record into the existing RLOC-set. The 'signature' is optional; when it is not included, the authorizing entity has not yet allowed the registering entity to register the EID <eid>. Note that multiple entities can register RLOC-records with the same <eid>, meaning that signature verification for all of them is required before the Map-Server accepts the registration.
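On the receiving side (the Map-Server for Map-Registers and Map-Requests, an xTR for Map-Notify messages), verification is the mirror image of the signing sketch above: look up the Hash-EID or signature-id in the mapping system, decode the base64 public key and signature, and check the signature over locally reconstructed signature data. A hedged sketch, again assuming "SHA256withECDSA" and an X.509/DER-encoded public key:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.Signature;
import java.security.spec.X509EncodedKeySpec;
import java.util.Base64;

public class SignatureVerification {
    // pubkeyB64      : the "public-key" value from the Hash-EID mapping's RLOC-record
    // signatureData  : rebuilt by the verifier from the received message contents
    // signatureB64   : the "signature" value from the signed message's RLOC-record
    static boolean verify(String pubkeyB64, String signatureData, String signatureB64)
            throws Exception {
        PublicKey publicKey = KeyFactory.getInstance("EC")
                .generatePublic(new X509EncodedKeySpec(Base64.getDecoder().decode(pubkeyB64)));
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(publicKey);
        verifier.update(signatureData.getBytes(StandardCharsets.US_ASCII));
        return verifier.verify(Base64.getDecoder().decode(signatureB64));
    }
}
```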
When the Map-Server receives a Map-Register for <eid>, it looks up the 'hash-<hash>' EID in the mapping system. If it is not found, the Map-Register EID-record is not processed and the next EID-record is retrieved from the Map-Register message, if it exists. If the Hash-EID entry is found, the registering entity's signature is verified first. If the verification fails, the Map-Register EID-record is not accepted. Otherwise, the RLOC-set is searched for all entries whose "allow-eid" matches the EID being registered, <eid>. For those entries found, if any of them do not have a "signature" JSON item, the EID-record is not accepted. Otherwise, the signature-id is looked up in the mapping system to retrieve the public-key of the authorizing entity. If the verification is successful, then a lookup for the next RLOC-record signature-id is done. Only when all signatures are verified is the Map-Register EID-record accepted.

The Map-Server should reject an RLOC-record with a signature-id that contains the Hash-EID of the entry itself, preventing a registering entity from authorizing itself.

Here is an example of a Hash-EID mapping stored in the mapping system:

EID-record: [1000]'hash-1111:2222:3333:4444', RLOC-Set (count is 4):
RLOC-record: { "public-key" : "<pubkey-base64>" }

This mapping stores the public-key of the registering entity with Hash-EID 1111:2222:3333:4444. The registering entity registered this RLOC-record. There are two authorizing entities, :1111 and :2222, which allow it to register IPv4 EID 1.1.1.1/32. They each registered their respective RLOC-records. A third authorizing entity, :5555, registers an RLOC-record that has not yet authorized the registering entity to register Geo-Coordinate 37-16-46-N-121-52-4-W. Note that the mapping and the signature-IDs are all within the context of instance-ID 1000.

13. Security Considerations

The mechanisms within this specification intentionally use accepted practices and state-of-the-art public-key cryptography. Crypto-EIDs can be made private when control messages are encrypted, for instance, using [RFC8061]. The topological or physical location of a Crypto-EID is only available to the other Crypto-EIDs that register in the same LISP Instance-ID and have their corresponding Hash-EIDs registered.

This draft does not address replay attacks directly. If a man-in-the-middle captures Map-Register messages, it could send such captured packets at a later time; they contain valid signatures from the source, so the Map-Server verifies the signature as good and interprets the contents as valid, when in fact the contents can contain old mapping information. This problem can be solved by encrypting the contents of Map-Registers using a third-party protocol like DTLS [RFC6347], or LISP-Crypto [RFC8061] directly by encapsulating Map-Registers in LISP data packets (using port 4341).

Map-Reply message signatures and authentication are not in scope for this document. This document focuses on authentication between xTRs and mapping system components. Map-Reply authentication, which is performed between xTRs, is described in [I-D.ietf-lisp-sec].

14. IANA Considerations

Since there are no new packet formats introduced for the functionality in this specification, there are no specific requests for IANA.

15. References

15.1. Normative References

15.2. Informative References

Appendix A. Acknowledgments

A special thanks goes to Sameer Merchant and Colin Cantrell for their ideas and technical contributions to the ideas in this draft.
Appendix B. Document Change Log

[RFC Editor: Please delete this section on publication as RFC.]

B.1. Changes to draft-farinacci-lisp-ecdsa-auth-03.txt

- Posted September 2018.
- Change all occurrences of signature-EID to signature-ID.
- Document how Map-Servers sign Map-Notify messages so they can be verified by xTRs.
- Add multi-signatures to mappings so a 3rd-party can allow an entity to register any type of EID.

B.2. Changes to draft-farinacci-lisp-ecdsa-auth-02.txt

- Draft posted April 2018.
- Generalize text to allow Map-Requesting and Map-Registering for any EID type with a proper signature-EID and signature encoded together.

B.3. Changes to draft-farinacci-lisp-ecdsa-auth-01.txt

- Draft posted October 2017.
- Make it more clear what values and format the EID hash is run over.
- Update references to newer RFCs and Internet-Drafts.

B.4. Changes to draft-farinacci-lisp-ecdsa-auth-00.txt

- Initial draft posted July 2017.

Authors' Addresses

Dino Farinacci
lispers.net
San Jose, CA
USA

Email: farinacci@gmail.com
{"Source-Url": "https://tools.ietf.org/pdf/draft-farinacci-lisp-ecdsa-auth-03.pdf", "len_cl100k_base": 5267, "olmocr-version": "0.1.50", "pdf-total-pages": 17, "total-fallback-pages": 0, "total-input-tokens": 33487, "total-output-tokens": 7979, "length": "2e12", "weborganizer": {"__label__adult": 0.00035500526428222656, "__label__art_design": 0.00034999847412109375, "__label__crime_law": 0.0013113021850585938, "__label__education_jobs": 0.0005807876586914062, "__label__entertainment": 9.763240814208984e-05, "__label__fashion_beauty": 0.0001735687255859375, "__label__finance_business": 0.001033782958984375, "__label__food_dining": 0.00032520294189453125, "__label__games": 0.0005168914794921875, "__label__hardware": 0.00287628173828125, "__label__health": 0.0004963874816894531, "__label__history": 0.0004057884216308594, "__label__home_hobbies": 0.00010538101196289062, "__label__industrial": 0.0007395744323730469, "__label__literature": 0.00033164024353027344, "__label__politics": 0.0005421638488769531, "__label__religion": 0.0005092620849609375, "__label__science_tech": 0.2100830078125, "__label__social_life": 0.00011283159255981444, "__label__software": 0.047607421875, "__label__software_dev": 0.73046875, "__label__sports_fitness": 0.0002818107604980469, "__label__transportation": 0.0006952285766601562, "__label__travel": 0.00021791458129882812}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25979, 0.05523]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25979, 0.39076]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25979, 0.81796]], "google_gemma-3-12b-it_contains_pii": [[0, 1570, false], [1570, 2776, null], [2776, 3274, null], [3274, 5490, null], [5490, 6257, null], [6257, 8341, null], [8341, 10024, null], [10024, 11975, null], [11975, 14047, null], [14047, 15650, null], [15650, 17353, null], [17353, 19764, null], [19764, 21613, null], [21613, 23432, null], [23432, 24939, null], [24939, 25979, null], [25979, 25979, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1570, true], [1570, 2776, null], [2776, 3274, null], [3274, 5490, null], [5490, 6257, null], [6257, 8341, null], [8341, 10024, null], [10024, 11975, null], [11975, 14047, null], [14047, 15650, null], [15650, 17353, null], [17353, 19764, null], [19764, 21613, null], [21613, 23432, null], [23432, 24939, null], [24939, 25979, null], [25979, 25979, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 25979, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25979, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25979, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25979, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 25979, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25979, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25979, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25979, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25979, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25979, null]], "pdf_page_numbers": [[0, 1570, 1], [1570, 2776, 2], [2776, 3274, 3], [3274, 5490, 4], [5490, 6257, 5], [6257, 8341, 6], [8341, 10024, 7], [10024, 11975, 8], 
[11975, 14047, 9], [14047, 15650, 10], [15650, 17353, 11], [17353, 19764, 12], [19764, 21613, 13], [21613, 23432, 14], [23432, 24939, 15], [24939, 25979, 16], [25979, 25979, 17]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25979, 0.0]]}
olmocr_science_pdfs
2024-11-27
2024-11-27
1a5a81e5b08a8ba21b03e3a035b544cdbcf3d417
JAIN SIP Tutorial
Serving the Developer Community

Phelim O'Doherty, Sun Microsystems
Mudumbai Ranganathan, NIST

JAIN SIP is the standardized Java interface to the Session Initiation Protocol for desktop and server applications. JAIN SIP enables transaction stateless, transaction stateful and dialog stateful control over the protocol.

Presentation Outline
• What is SIP?
• Why create JAIN SIP?
• Introduction to JAIN SIP
• Developer Code Snippets
• Implementation Use-Cases

Session Initiation Protocol
- Session Initiation Protocol (SIP) is a signaling protocol for creating, modifying and destroying dialogs between multiple endpoints:
  - Request/response protocol (like HTTP, but peer-to-peer)
  - Simple and extensible
  - Designed for mobility (proxy/redirect servers)
  - Bi-directional authentication
  - Capability negotiation
- SIP is used for controlling the signaling that enables manipulation of sessions such as:
  - Instant Messaging sessions
  - Phone calls over the Internet
  - Gaming servers
  - Resource Location

SIP Functionality
- SIP supports five facets of establishing and terminating multimedia communications; these include:
  - User location: determination of the end system to be used for communication.
  - User capabilities: determination of the media and media parameters to be used.
  - User availability: determination of the willingness of the called party to engage in communications.
  - Call setup: "ringing", establishment of call parameters at both called and calling party.
  - Call handling: including transfer and termination of calls.

Presentation Outline
• What is SIP?
• Why create JAIN SIP?
• Introduction to JAIN SIP
• Developer Code Snippets
• Implementation Use-Cases

Why Create JAIN SIP?
• SIP is an IETF specification that has been adopted by the communications industry in the form of 3GPP, 3GPP2, OMA and ITU.
• The IETF specification defines the SIP protocol in text format.
• The SIP Community holds various interoperability events to ensure the credibility of the protocol.
• As a developer you are free to implement the protocol in any language, and hence define your own interface for accessing the defined behavior of the protocol as outlined by the IETF standard.
• While the IETF specification ensures interoperability between stacks, it doesn't address interoperability of applications across stacks.
• JAIN SIP satisfies this need in the Java programming language. By utilizing the JAIN SIP specification you get both interoperability between stacks and interoperability of applications across stacks, often referred to as application portability.
  – Both stack interoperability and application portability are required in this new age of communication standards.

Presentation Outline
• What is SIP?
• Why create JAIN SIP?
• Introduction to JAIN SIP
• Developer Code Snippets
• Implementation Use-Cases

JAIN SIP
• The Java-standard interface to a SIP signaling stack.
  – Standardizes the interface to the stack.
  – Standardizes the message interface.
  – Standardizes events and event semantics.
  – Application portability - verified via the TCK.
• Designed for developers who require powerful access to the SIP protocol.
• JAIN SIP can be utilized in a user agent, proxy, registrar or embedded into a service container.

JAIN SIP Functionality
• JAIN SIP supports the SIP protocol functionality described in RFC 3261.
• JAIN SIP supports the following SIP extensions:
  – RFC 2976 allows for the carrying of session-related control information that is generated during a session.
  – RFC 3262 provides information on progress of the request processing.
  – RFC 3265 provides the ability to request asynchronous notification of events.
  – RFC 3311 allows the caller or callee to provide updated session information before a final response.
  – RFC 3326 provides the ability to know why a SIP request was issued.
  – RFC 3428 allows the transfer of Instant Messages.
  – RFC 3515 requests that the recipient refer to a resource provided in the request.

JAIN SIP Object Architecture
[Architecture diagram relating the application's SipListener, the SipFactory, the SipProvider and the SipStack (slide labels: createListener(), getInstance(), createStack(), createProvider()), layered over a proprietary SIP stack and the network.]

SIP Implementation Structure
[Diagram: SipProviders exchange Messages and Events with the application; the SipStack implementation manages Dialogs and Transactions (Requests/Responses) and contains the parser and encoder that handle raw bytes on the network.]

SipStack Interface
- Manages Listening Points and Providers.
- A SipStack is associated with an IP address.
- Can have multiple Listening Points.
- An application can have multiple SipStacks.
- Cannot be deleted once created.
- Instantiated by the SipFactory and initialized with a property set.
- `javax.sip.*` property names are reserved and defined for stack configuration properties.
- Defines retransmission settings.
- Defines router information.

Retransmissions
- JAIN SIP provides a convenience function that ensures all retransmissions are handled by the JAIN SIP implementation.
- Reduces complexity for applications acting as user agents.
- Reduces complexity for integrating JAIN SIP as a base implementation for a SIP Servlet container or a JAIN SLEE implementation.
- Configured via Java properties on the SipStack interface.
- Default is off.
- The default handling of message retransmissions in JAIN SIP is dependent on the application.
- Stateful proxy applications need not be concerned with retransmissions as these are handled by JAIN SIP.
- Typically User Agent applications must handle retransmissions of ACKs and 2xx Responses.

Stack Properties
- **IP_ADDRESS** - Sets the IP Address of the SipStack. This property is mandatory.
- **STACK_NAME** - Sets a user-friendly name to identify the underlying stack implementation. This property is mandatory.
- **OUTBOUND_PROXY** - Sets the outbound proxy of the SIP Stack.
- **ROUTER_PATH** - Sets the fully qualified classpath to the application-supplied Router object that determines how to route messages before a dialog is established.
- **EXTENSION_METHODS** - This configuration value informs the underlying implementation of supported extension methods that create new dialogs.
- **RETRANSMISSION_FILTER** - A helper function for User Agents that enables the stack to handle retransmission of ACK Requests, 1xx and 2xx Responses to INVITE transactions for the application.

SipProvider Interface
- Register a SipListener with the SipProvider.
  - Notifies the registered Listener of Events.
- De-register a SipListener from the SipProvider.
  - Once de-registered, it no longer receives Events from the SipProvider.
- Client and Server Transaction creation methods.
  - For sending Request and Response messages statefully.
- CallIdHeader creation method.
- Send Requests and Responses statelessly.
- Listening Point manipulation methods.
  - Only one provider per listening point.
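Putting the SipStack configuration and SipProvider slides together, here is a minimal setup sketch. It assumes the JAIN SIP 1.1 javax.sip API (where the stack's IP address comes from the IP_ADDRESS property) and the NIST reference implementation path name "gov.nist"; method signatures differ slightly in later JAIN SIP versions, and the property values are placeholders.

```java
import java.util.Properties;
import javax.sip.*;

public class StackSetup {
    public static void main(String[] args) throws Exception {
        // Obtain the SipFactory and point it at an implementation
        // (the NIST reference implementation is assumed here).
        SipFactory sipFactory = SipFactory.getInstance();
        sipFactory.setPathName("gov.nist");

        // Configure the stack with the reserved javax.sip.* properties.
        Properties properties = new Properties();
        properties.setProperty("javax.sip.IP_ADDRESS", "127.0.0.1");
        properties.setProperty("javax.sip.STACK_NAME", "tutorialStack");
        SipStack sipStack = sipFactory.createSipStack(properties);

        // One listening point per port/transport; one SipProvider per listening point.
        ListeningPoint udp = sipStack.createListeningPoint(5060, "udp");
        SipProvider sipProvider = sipStack.createSipProvider(udp);

        // The application's SipListener implementation would then be registered
        // with sipProvider.addSipListener(...) to receive Requests, Responses
        // and Timeouts as events.
        System.out.println("Stack " + sipStack.getStackName() + " ready on " + sipProvider);
    }
}
```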
Responsibilities of JAIN SIP
• Provide methods to format SIP messages.
• The ability for an application to send and receive SIP messages.
• Parse incoming messages and enable application access to fields via a standardized Java interface.
• Invoke appropriate application handlers when protocol-significant events occur:
  – Message arrivals and Transaction time-outs.
• Provide Transaction support and manage Transaction state and lifetime on behalf of a user application.
• Provide Dialog support and manage Dialog state and lifetime on behalf of a user application.

SipListener Interface
• A single SipListener per SipStack, which implies a single Listener in the architecture.
  – All SipProviders associated with a SipStack have the same SipListener.
• Process Requests either statefully or statelessly, dependent on application logic.
• Process Responses to recently sent Requests statefully.
• Process Transaction timeout and retransmit Timer events.
  – Transaction processing notifications.

Responsibilities of the Application
• The application registers an implementation of the SipListener interface to interact with the SIP Stack.
• The application must register with the SipProvider for all messaging capabilities with the stack.
  – The application requests transactions for stateful messaging.
  – The application sends stateless messages.
  – The application accesses stack objects.
• The application receives messages from the stack as Events via the SipListener interface.

JAIN SIP Messaging Architecture

Event Model
- The architecture is developed for the J2SE environment and is therefore event based, utilizing the Listener/Provider event model.
  - There is a direct reference between the event provider and the event consumer.
  - The event consumer must register with the event provider.
- Events encapsulate incoming Requests and Responses.
  - The Event Model is one way, i.e. the Application doesn't send out events, it sends out messages.
  - The event model is asynchronous in nature, using transactional identifiers to correlate messages.
- The SipListener represents the event consumer and listens for incoming Events that encapsulate messages that may be responses to initiated dialogs or new incoming dialogs.
- The SipProvider is the event provider, which receives messages from the network and passes them to the application as events.

Packages
- General package - Defines the architectural interfaces, the transaction and dialog interfaces, and the event objects of the specification.
- Address package - Contains a generic URI wrapper and defines SIP URI and Tel URI interfaces.
- Message package - Defines the interfaces necessary for the Request and Response messages.
- Header package - Defines interfaces for all the supported headers and extension headers.

Factories
JAIN SIP defines four different factories, each with respective responsibilities, namely:
- **SipFactory** - This interface defines methods to create new Stack objects and other factory objects.
- **AddressFactory** - This interface defines methods to create SipURIs and TelURLs.
- **HeaderFactory** - This interface defines methods to create new Header objects.
- **MessageFactory** - This interface defines methods to create new Request and Response objects.
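As a concrete illustration of the factories just listed, the sketch below obtains them from the SipFactory and builds the headers that the createRequest snippet later in the deck assumes. The variable names and literal values are illustrative only; the JAIN SIP 1.1 javax.sip API and the NIST reference implementation are assumed.

```java
import java.util.ArrayList;
import java.util.List;
import javax.sip.SipFactory;
import javax.sip.address.*;
import javax.sip.header.*;
import javax.sip.message.MessageFactory;

public class FactoryExample {
    public static void main(String[] args) throws Exception {
        SipFactory sipFactory = SipFactory.getInstance();
        sipFactory.setPathName("gov.nist");  // reference implementation assumed

        // Three of the four JAIN SIP factories (the SipFactory itself is the fourth).
        AddressFactory addressFactory = sipFactory.createAddressFactory();
        HeaderFactory headerFactory = sipFactory.createHeaderFactory();
        MessageFactory messageFactory = sipFactory.createMessageFactory();

        // Addresses and headers for an INVITE (illustrative values).
        SipURI fromUri = addressFactory.createSipURI("alice", "example.com");
        SipURI toUri = addressFactory.createSipURI("bob", "example.org");
        FromHeader fromHeader =
                headerFactory.createFromHeader(addressFactory.createAddress(fromUri), "tag1234");
        ToHeader toHeader =
                headerFactory.createToHeader(addressFactory.createAddress(toUri), null);
        CSeqHeader cSeqHeader = headerFactory.createCSeqHeader(1, "INVITE");
        CallIdHeader callIdHeader = headerFactory.createCallIdHeader("abc123@127.0.0.1");
        MaxForwardsHeader maxForwards = headerFactory.createMaxForwardsHeader(70);
        List viaHeaders = new ArrayList();
        viaHeaders.add(headerFactory.createViaHeader("127.0.0.1", 5060, "udp", null));

        // These headers feed the messageFactory.createRequest(...) call shown in the
        // "Developer Code Snippets" section; messageFactory is left unused here.
        System.out.println(fromHeader);
    }
}
```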
Messages and Headers

Messages
- There are two types of messages in SIP, which JAIN SIP defines as Interfaces:
- Request messages are sent from the client to the server.
  - They contain a specific method type that identifies the type of Request.
  - A Request-URI indicates the user or service to which this request is being addressed.
- Response messages are sent from the server to the client in response to a Request.
  - They contain a specific status code that identifies the type of Response.
  - A Request-URI indicates the user or service to which this request is being addressed.
  - A reason phrase that is intended for the human user.
- Messages may contain multiple Headers and Headers of the same type.
- The order of all Headers within a message is significant, i.e. Headers which are hop-by-hop must appear before any Headers which are end-to-end.
- A Message Body contains a session description.
  - JAIN SIP defines this format as an Object, which allows the body to be a String, an Object type defined by the Session Description Protocol (SDP) JSR specification, or a byte array.

Request Message Types
The following request messages are defined by the core SIP protocol:
- **INVITE** - Invites a participant to a session
- **BYE** - Ends a client's participation in a session
- **CANCEL** - Terminates a search
- **OPTIONS** - Queries a participant about their media capabilities
- **ACK** - For reliability and call acceptance
- **REGISTER** - Informs a SIP server about the location of a user

Request Message Types
The following request messages are defined by various SIP extensions:
- **INFO** - Session-related control information generated during a session.
- **PRACK** - For reliability of provisional responses.
- **UPDATE** - Update a session without impacting the state of a dialog.
- **SUBSCRIBE** - Request notification from remote nodes when certain events occur.
- **NOTIFY** - Notification from remote nodes when certain events occur.
- **MESSAGE** - For sending instant messages.
- **REFER** - Refer to a resource provided in the request.

Headers
• SIP header fields are similar to HTTP header fields in both syntax and semantics.
• JAIN SIP models each SIP header as a specific interface, as opposed to having a single generic interface to handle all header information.
  – Each interface specifies the Header's acceptable parameters.
  – It is deemed more explicit for protocol support to define protocol characteristics as opposed to generic interfaces.
• This specification supports all the headers defined in RFC 3261 and other headers introduced by supporting the following additional RFCs:
  – RFC 3265 - AllowEventsHeader, EventHeader and SubscriptionStateHeader to support the event notification framework.
  – RFC 3326 - ReasonHeader to support information on why the request was issued.
  – RFC 3515 - ReferToHeader to support recipients referring requests to another resource.

JAIN SIP Extensible by Design
• SIP Extensions described in Internet-Drafts and RFCs typically define:
  – New SIP Methods
    • New dialog-creating methods
  – New SIP Headers
• JAIN SIP defines an extensible framework to support new methods standardized for SIP:
  – New SIP methods can be set using the string method field of a request.
  – An application informs the stack of dialog-creating methods by specifying the method name in the EXTENSION_METHODS property of the SipStack configuration.
• JAIN SIP defines an extensible framework to support new headers standardized for SIP:
  – Defines an ExtensionHeader interface that contains the header name and header value attribute pair.
  – Can be created and accessed by name.

Transactions and Dialogs

SIP Transactions
A SIP transaction consists of a single request and any responses to that request.

Transaction Support
• JAIN SIP standardizes the interface to the generic transactional model defined by the SIP protocol.
  – JAIN SIP models both Client and Server Transactions as Interfaces.
• A Transaction is created on an incoming Request or may be created to send an outgoing request.
  – When a Request is sent out statefully, the application must request a ClientTransaction.
  – When a new Request arrives, the application determines whether to handle the request via a ServerTransaction.
  – When a Request in an existing dialog arrives, the stack automatically associates it to a ServerTransaction.
• When a response arrives, the Stack possibly associates a previously created ClientTransaction with the response.
  – May be stray.
• Messages are passed to the SipProvider in order to generate a new transaction. This transaction can be used to send the message onto the network.
• The implementation manages the association between Transactions and Dialogs.

Dialog Support
- A Dialog is a peer-to-peer association between communicating SIP endpoints.
- The dialog represents a context in which to interpret SIP messages.
- Dialogs are never directly created by the Application.
  - Dialogs are established by Dialog-creating Transactions (INVITE, SUBSCRIBE…) and are managed by the stack.
  - Dialog deletion may be under application control.
    - Though not generally recommended.
- Dialogs are used to maintain data needed for further message transmissions within the dialog:
  - Route Sets, Sequence Numbers, URIs of the parties in the dialog.
- Dialogs have a state machine:
  - Early, Confirmed, Completed and Terminated.
- Transactions may belong to a Dialog.
  - Dialog state changes as a result of changes in Transaction State.
  - Access to dialog functionality from the transaction interface.

Third Party Call Control – 3PCC
• 3PCC refers to the general ability to establish and manipulate calls between other parties.
• Establishment of these calls is orchestrated by a third party, referred to as the controller:
  – A controller is a SIP User Agent that wishes to create a session between two other user agents.
• 3PCC is often used for:
  – operator services, i.e. the operator creates a call that connects two participants together.
  – conferencing.

3PCC Example using JAIN SIP
[Call-flow diagram: the controller's SipListener uses the SipFactory and SipProvider to create INVITE client transactions (inviteA, inviteB) towards SIP Party A and SIP Party B, establishing Dialog A and Dialog B; 200 OK responses carrying the offers are acknowledged with ACKs, a re-INVITE carries offer B towards Party A via a third client transaction (inviteC), and RTP media then flows between the parties.]

Latest Specification Updates

JAIN SIP v1.0
- RFC 2543 supported.
- J2SE 1.3 and above.
- Transactions referenced by long.
- Transaction state is not visible to the application.
- No explicit Dialog support.
- Stack configuration not defined.

JAIN SIP v1.1
- RFC 3261 supported.
- J2SE 1.4 and above.
- Transaction interfaces defined.
- Transaction/Dialog state can be read by the application.
- Dialog interface defined and managed by the stack.
- Stack configured with defined properties.

Presentation Outline
• What is SIP?
• Why create JAIN SIP?
• Introduction to JAIN SIP
• Developer Code Snippets
• Implementation Use-Cases

```java
try {
    Properties properties = new Properties();
    properties.setProperty("javax.sip.IP_ADDRESS", "129.6.55.181");
    properties.setProperty("javax.sip.OUTBOUND_PROXY", "129.6.55.182:5070/UDP");
    // ... other initialization properties
    sipStack = sipFactory.createSipStack(properties);
} catch (SipException e) {
    System.exit(-1);
}
```
```java
try {
    SipURI requestURI = addressFactory.createSipURI(toUser, toSipAddress);
    // ... create other headers
    Request request = messageFactory.createRequest(requestURI, Request.INVITE,
            callIdHeader, cSeqHeader, fromHeader, toHeader, viaHeaders, maxForwards);
} catch (ParseException e) {
    // handle malformed addresses or headers
}
```

Application – Sending Requests

Send outgoing messages:

```java
try {
    // Create the client transaction
    ClientTransaction inviteTid = sipProvider.getNewClientTransaction(request);
    // Send the request
    inviteTid.sendRequest();
} catch (SipException e) {
    // handle transaction or transport failures
}
```

Application – Processing Requests

Handle incoming messages as Events:

```java
public void processRequest(RequestEvent requestEvent) {
    Request request = requestEvent.getRequest();
    ServerTransaction st = requestEvent.getServerTransaction();
    // do request-specific processing here
}
```

Presentation Outline
• What is SIP?
• Why create JAIN SIP?
• Introduction to JAIN SIP
• Developer Code Snippets
• Implementation Use-Cases

JAIN SIP for Instant Messaging
- Suitable for building IM and Presence Clients and Servers.
- The API supports the required methods and Headers.
- Creates and manages Dialogs for SUBSCRIBE and MESSAGE methods.
- The NIST-SIP JAIN IM Client SipListener is about 1100 LOC.
- Interoperates with Microsoft Messenger IM.
http://jain-sip-presence-proxy.dev.java.net

JAIN SIP for Proxy Servers
- Facilitates construction of Proxy Servers.
- Stateless, Transaction-stateful, and Dialog-stateful operation.
- Access to Dialog/Transaction state and route tables.
- Extensibility and application-controlled Routing.
- Deep copy semantics for cloning.
- Incorporates IM + Presence Support.
http://jain-sip-presence-proxy.dev.java.net

JAIN SIP for Telephony - SIP COMMUNICATOR
• Ideal for building telephony applications.
• The API provides a complete set of functionality for managing calls.
• Spares the application the burden of managing dialogs and transactions.
• A complete example of an audio/video telephony application.
  – Uses the JAIN SIP RI and JMF.
• Interoperates with Microsoft Windows Messenger.
http://sip-communicator.dev.java.net

JAIN SIP Reference Implementation
- In the public domain.
- Includes trace visualization tools.
- Footprint:
  - About 46000 LOC.
  - Jar file about 355 Kb.
  - About 3 MB of memory after running a few requests.
http://jain-sip.dev.java.net

JAIN SIP Resources
- JAIN SIP Specification: http://jcp.org/jsr/detail/032.jsp
- JAIN SIP Discussion List: http://archives.java.sun.com/jain-sip-interest.html
- JAIN SIP Collaboration Project: http://jain-sip.dev.java.net
- SIP-Communicator Collaboration Project: http://sip-communicator.dev.java.net
- SIP-Presence-Proxy Collaboration Project: http://jain-sip-presence-proxy.dev.java.net

JSR 32: http://jcp.org/en/jsr/detail?id=32
Subscribe to: http://archives.java.sun.com/jain-sip-interest.html
{"Source-Url": "http://www.tti.unipa.it/pg/pg/Teaching_files/02%20eser.%20JAIN-SIP-Tutorial.pdf", "len_cl100k_base": 4526, "olmocr-version": "0.1.50", "pdf-total-pages": 49, "total-fallback-pages": 0, "total-input-tokens": 67730, "total-output-tokens": 6582, "length": "2e12", "weborganizer": {"__label__adult": 0.0002701282501220703, "__label__art_design": 0.00010895729064941406, "__label__crime_law": 0.00025177001953125, "__label__education_jobs": 0.00025177001953125, "__label__entertainment": 3.7550926208496094e-05, "__label__fashion_beauty": 9.113550186157228e-05, "__label__finance_business": 0.0001041889190673828, "__label__food_dining": 0.00021648406982421875, "__label__games": 0.0003733634948730469, "__label__hardware": 0.0007395744323730469, "__label__health": 0.00019359588623046875, "__label__history": 8.779764175415039e-05, "__label__home_hobbies": 3.361701965332031e-05, "__label__industrial": 0.00017881393432617188, "__label__literature": 8.088350296020508e-05, "__label__politics": 0.0001322031021118164, "__label__religion": 0.00026297569274902344, "__label__science_tech": 0.001914024353027344, "__label__social_life": 5.567073822021485e-05, "__label__software": 0.00817108154296875, "__label__software_dev": 0.98583984375, "__label__sports_fitness": 0.00022518634796142575, "__label__transportation": 0.0002211332321166992, "__label__travel": 0.00013971328735351562}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 21017, 0.0077]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 21017, 0.33606]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 21017, 0.81129]], "google_gemma-3-12b-it_contains_pii": [[0, 112, false], [112, 337, null], [337, 486, null], [486, 1041, null], [1041, 1587, null], [1587, 1728, null], [1728, 2778, null], [2778, 2921, null], [2921, 3341, null], [3341, 4046, null], [4046, 4268, null], [4268, 4494, null], [4494, 4944, null], [4944, 5656, null], [5656, 6470, null], [6470, 6969, null], [6969, 7524, null], [7524, 7959, null], [7959, 8415, null], [8415, 8447, null], [8447, 9264, null], [9264, 9735, null], [9735, 10221, null], [10221, 10242, null], [10242, 11337, null], [11337, 11766, null], [11766, 12342, null], [12342, 13283, null], [13283, 14018, null], [14018, 14043, null], [14043, 14143, null], [14143, 15085, null], [15085, 15928, null], [15928, 15941, null], [15941, 16406, null], [16406, 17149, null], [17149, 17638, null], [17638, 17783, null], [17783, 18161, null], [18161, 18427, null], [18427, 18673, null], [18673, 18999, null], [18999, 19140, null], [19140, 19494, null], [19494, 19857, null], [19857, 20268, null], [20268, 20505, null], [20505, 20909, null], [20909, 21017, null]], "google_gemma-3-12b-it_is_public_document": [[0, 112, true], [112, 337, null], [337, 486, null], [486, 1041, null], [1041, 1587, null], [1587, 1728, null], [1728, 2778, null], [2778, 2921, null], [2921, 3341, null], [3341, 4046, null], [4046, 4268, null], [4268, 4494, null], [4494, 4944, null], [4944, 5656, null], [5656, 6470, null], [6470, 6969, null], [6969, 7524, null], [7524, 7959, null], [7959, 8415, null], [8415, 8447, null], [8447, 9264, null], [9264, 9735, null], [9735, 10221, null], [10221, 10242, null], [10242, 11337, null], [11337, 11766, null], [11766, 12342, null], [12342, 13283, null], [13283, 14018, null], [14018, 14043, null], [14043, 14143, null], [14143, 15085, null], [15085, 15928, null], [15928, 15941, null], [15941, 16406, null], 
[16406, 17149, null], [17149, 17638, null], [17638, 17783, null], [17783, 18161, null], [18161, 18427, null], [18427, 18673, null], [18673, 18999, null], [18999, 19140, null], [19140, 19494, null], [19494, 19857, null], [19857, 20268, null], [20268, 20505, null], [20505, 20909, null], [20909, 21017, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 21017, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 21017, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21017, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21017, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 21017, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21017, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21017, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21017, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21017, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 21017, null]], "pdf_page_numbers": [[0, 112, 1], [112, 337, 2], [337, 486, 3], [486, 1041, 4], [1041, 1587, 5], [1587, 1728, 6], [1728, 2778, 7], [2778, 2921, 8], [2921, 3341, 9], [3341, 4046, 10], [4046, 4268, 11], [4268, 4494, 12], [4494, 4944, 13], [4944, 5656, 14], [5656, 6470, 15], [6470, 6969, 16], [6969, 7524, 17], [7524, 7959, 18], [7959, 8415, 19], [8415, 8447, 20], [8447, 9264, 21], [9264, 9735, 22], [9735, 10221, 23], [10221, 10242, 24], [10242, 11337, 25], [11337, 11766, 26], [11766, 12342, 27], [12342, 13283, 28], [13283, 14018, 29], [14018, 14043, 30], [14043, 14143, 31], [14143, 15085, 32], [15085, 15928, 33], [15928, 15941, 34], [15941, 16406, 35], [16406, 17149, 36], [17149, 17638, 37], [17638, 17783, 38], [17783, 18161, 39], [18161, 18427, 40], [18427, 18673, 41], [18673, 18999, 42], [18999, 19140, 43], [19140, 19494, 44], [19494, 19857, 45], [19857, 20268, 46], [20268, 20505, 47], [20505, 20909, 48], [20909, 21017, 49]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21017, 0.0]]}
olmocr_science_pdfs
2024-11-27
2024-11-27
5bec401c50ae6d0016c5af7a738410add1b9904b
The Web Origin Concept Abstract This document defines the concept of an "origin", which is often used as the scope of authority or privilege by user agents. Typically, user agents isolate content retrieved from different origins to prevent malicious web site operators from interfering with the operation of benign web sites. In addition to outlining the principles that underlie the concept of origin, this document details how to determine the origin of a URI and how to serialize an origin into a string. It also defines an HTTP header field, named "Origin", that indicates which origins are associated with an HTTP request. Status of This Memo This is an Internet Standards Track document. This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 5741. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc6454. Copyright Notice Copyright (c) 2011 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust’s Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License. Table of Contents 1. Introduction ........................................... 3 2. Conventions ........................................... 3 2.1. Conformance Criteria ............................. 3 2.2. Syntax Notation .................................. 4 2.3. Terminology ...................................... 4 3. Principles of the Same-Origin Policy .................. 4 3.1. Trust ........................................... 5 3.1.1. Pitfalls ...................................... 5 3.2. Origin .......................................... 6 3.2.1. Examples .................................... 7 3.3. Authority ....................................... 7 3.3.1. Pitfalls ...................................... 8 3.4. Policy .......................................... 8 3.4.1. Object Access ................................. 8 3.4.2. Network Access ............................... 9 3.4.3. Pitfalls ...................................... 9 3.5. Conclusion ..................................... 10 4. Origin of a URI ....................................... 10 5. Comparing Origins .................................... 11 6. Serializing Origins .................................. 11 6.1. Unicode Serialization of an Origin ............... 12 6.2. ASCII Serialization of an Origin ................. 12 7. The HTTP Origin Header Field ........................ 13 7.1. Syntax .......................................... 13 7.2. Semantics ....................................... 13 7.3. User Agent Requirements ......................... 14 8. Security Considerations ............................... 14 8.1. Reliance on DNS .................................. 15 8.2. Divergent Units of Isolation ...................... 15 8.3. 
Ambient Authority ............................... 16 8.4. IDNA Dependency and Migration .................... 16 9. IANA Considerations ................................... 17 10. References ........................................... 17 10.1. Normative References ............................ 17 10.2. Informative References ......................... 18 Appendix A. Acknowledgements ........................... 20 1. Introduction User agents interact with content created by a large number of authors. Although many of those authors are well-meaning, some authors might be malicious. To the extent that user agents undertake actions based on content they process, user agent implementors might wish to restrict the ability of malicious authors to disrupt the confidentiality or integrity of other content or servers. As an example, consider an HTTP user agent that renders HTML content retrieved from various servers. If the user agent executes scripts contained in those documents, the user agent implementor might wish to prevent scripts retrieved from a malicious server from reading documents stored on an honest server, which might, for example, be behind a firewall. Traditionally, user agents have divided content according to its "origin". More specifically, user agents allow content retrieved from one origin to interact freely with other content retrieved from that origin, but user agents restrict how that content can interact with content from another origin. This document describes the principles behind the so-called same-origin policy as well as the "nuts and bolts" of comparing and serializing origins. This document does not describe all the facets of the same-origin policy, the details of which are left to other specifications, such as HTML [HTML] and WebSockets [RFC6455], because the details are often application-specific. 2. Conventions 2.1. Conformance Criteria The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119]. Requirements phrased in the imperative as part of algorithms (such as "strip any leading space characters" or "return false and abort these steps") are to be interpreted with the meaning of the key word ("MUST", "SHOULD", "MAY", etc.) used in introducing the algorithm. Conformance requirements phrased as algorithms or specific steps can be implemented in any manner, so long as the end result is equivalent. In particular, the algorithms defined in this specification are intended to be easy to understand and are not intended to be performant. 2.2. Syntax Notation This specification uses the Augmented Backus-Naur Form (ABNF) notation of [RFC5234]. The following core rules are included by reference, as defined in [RFC5234], Appendix B.1: ALPHA (letters), CR (carriage return), CRLF (CR LF), CTL (controls), DIGIT (decimal 0-9), DQUOTE (double quote), HEXDIG (hexadecimal 0-9/A-F/a-f), LF (line feed), OCTET (any 8-bit sequence of data), SP (space), HTAB (horizontal tab), CHAR (any US-ASCII character), VCHAR (any visible US-ASCII character), and WSP (whitespace). The OWS rule is used where zero or more linear whitespace octets might appear. OWS SHOULD either not be produced or be produced as a single SP. Multiple OWS octets that occur within field-content SHOULD either be replaced with a single SP or transformed to all SP octets (each octet other than SP replaced with SP) before interpreting the field value or forwarding the message downstream. 
``` OWS = *( SP / HTAB / obs-fold ) ; "optional" whitespace obs-fold = CRLF ( SP / HTAB ) ; obsolete line folding ``` 2.3. Terminology The terms "user agent", "client", "server", "proxy", and "origin server" have the same meaning as in the HTTP/1.1 specification ([RFC2616], Section 1.3). A globally unique identifier is a value that is different from all other previously existing values. For example, a sufficiently long random string is likely to be a globally unique identifier. If the origin value never leaves the user agent, a monotonically increasing counter local to the user agent can also serve as a globally unique identifier. 3. Principles of the Same-Origin Policy Many user agents undertake actions on behalf of remote parties. For example, HTTP user agents follow redirects, which are instructions from remote servers, and HTML user agents expose rich Document Object Model (DOM) interfaces to scripts retrieved from remote servers. Without any security model, user agents might undertake actions detrimental to the user or to other parties. Over time, many web-related technologies have converged towards a common security model, known colloquially as the "same-origin policy". Although this security model evolved largely organically, the same-origin policy can be understood in terms of a handful of key concepts. This section presents those concepts and provides advice about how to use these concepts securely. 3.1. Trust The same-origin policy specifies trust by URI. For example, HTML documents designate which script to run with a URI: <script src="https://example.com/library.js"></script> When a user agent processes this element, the user agent will fetch the script at the designated URI and execute the script with the privileges of the document. In this way, the document grants all the privileges it has to the resource designated by the URI. In essence, the document declares that it trusts the integrity of information retrieved from that URI. In addition to importing libraries from URIs, user agents also send information to remote parties designated by URI. For example, consider the HTML form element: <form method="POST" action="https://example.com/login"> ... <input type="password"> ... </form> When the user enters his or her password and submits the form, the user agent sends the password to the network endpoint designated by the URI. In this way, the document exports its secret data to that URI, in essence declaring that it trusts the confidentiality of information sent to that URI. 3.1.1. Pitfalls When designing new protocols that use the same-origin policy, make sure that important trust distinctions are visible in URIs. For example, if both Transport Layer Security (TLS) and non-TLS protected resources use the "http" URI scheme (as in [RFC2817]), a document would be unable to specify that it wishes to retrieve a script only over TLS. By using the "https" URI scheme, documents are able to indicate that they wish to interact with resources that are protected from active network attackers. 3.2. Origin In principle, user agents could treat every URI as a separate protection domain and require explicit consent for content retrieved from one URI to interact with another URI. Unfortunately, this design is cumbersome for developers because web applications often consist of a number of resources acting in concert. Instead, user agents group URIs together into protection domains called "origins". 
Roughly speaking, two URIs are part of the same origin (i.e., represent the same principal) if they have the same scheme, host, and port. (See Section 4 for full details.) Q: Why not just use the host? A: Including the scheme in the origin tuple is essential for security. If user agents did not include the scheme, there would be no isolation between http://example.com and https://example.com because the two have the same host. However, without this isolation, an active network attacker could corrupt content retrieved from http://example.com and have that content instruct the user agent to compromise the confidentiality and integrity of content retrieved from https://example.com, bypassing the protections afforded by TLS [RFC5246]. Q: Why use the fully qualified host name instead of just the "top-level" domain? A: Although the DNS has hierarchical delegation, the trust relationships between host names vary by deployment. For example, at many educational institutions, students can host content at https://example.edu/~student/, but that does not mean a document authored by a student should be part of the same origin (i.e., inhabit the same protection domain) as a web application for managing grades hosted at https://grades.example.edu/. The example.edu deployment illustrates that grouping resources by origin does not always align perfectly with every deployment scenario. In this deployment, every student’s web site inhabits the same origin, which might not be desirable. In some sense, the origin granularity is a historical artifact of how the security model evolved. 3.2.1. Examples All of the following resources have the same origin: http://example.com/ http://example.com:80/ http://example.com/path/file Each of the URIs has the same scheme, host, and port components. Each of the following resources has a different origin from the others. http://example.com/ http://example.com:8080/ http://www.example.com/ https://example.com:80/ https://example.com/ http://example.org/ http://ietf.org/ In each case, at least one of the scheme, host, and port component will differ from the others in the list. 3.3. Authority Although user agents group URIs into origins, not every resource in an origin carries the same authority (in the security sense of the word "authority", not in the [RFC3986] sense). For example, an image is passive content and, therefore, carries no authority, meaning the image has no access to the objects and resources available to its origin. By contrast, an HTML document carries the full authority of its origin, and scripts within (or imported into) the document can access every resource in its origin. User agents determine how much authority to grant a resource by examining its media type. For example, resources with a media type of image/png are treated as images, and resources with a media type of text/html are treated as HTML documents. When hosting untrusted content (such as user-generated content), web applications can limit that content’s authority by restricting its media type. For example, serving user-generated content as image/png is less risky than serving user-generated content as text/html. Of course, many web applications incorporate untrusted content in their HTML documents. If not done carefully, these applications risk leaking their origin’s authority to the untrusted content, a vulnerability commonly known as cross-site scripting. 3.3.1. Pitfalls When designing new pieces of the web platform, be careful not to grant authority to resources irrespective of media type. 
Many web applications serve untrusted content with restricted media types. A new web platform feature that grants authority to these pieces of content risks introducing vulnerabilities into existing applications. Instead, prefer to grant authority to media types that already possess the origin’s full authority or to new media types designed specifically to carry the new authority. In order to remain compatible with servers that supply incorrect media types, some user agents employ "content sniffing" and treat content as if it had a different media type than the media type supplied by the server. If not done carefully, content sniffing can lead to security vulnerabilities because user agents might grant low-authority media types, such as images, the privileges of high-authority media types, such as HTML documents [SNIFF]. 3.4. Policy Generally speaking, user agents isolate different origins and permit controlled communication between origins. The details of how user agents provide isolation and communication vary depending on several factors. 3.4.1. Object Access Most objects (also known as application programming interfaces or APIs) exposed by the user agent are available only to the same origin. Specifically, content retrieved from one URI can access objects associated with content retrieved from another URI if, and only if, the two URIs belong to the same origin, e.g., have the same scheme, host, and port. There are some exceptions to this general rule. For example, some parts of HTML’s Location interface are available across origins (e.g., to allow for navigating other browsing contexts). As another example, HTML’s postMessage interface is visible across origins explicitly to facilitate cross-origin communication. Exposing objects to foreign origins is dangerous and should be done only with great care because doing so exposes these objects to potential attackers. 3.4.2. Network Access Access to network resources varies depending on whether the resources are in the same origin as the content attempting to access them. Generally, reading information from another origin is forbidden. However, an origin is permitted to use some kinds of resources retrieved from other origins. For example, an origin is permitted to execute script, render images, and apply style sheets from any origin. Likewise, an origin can display content from another origin, such as an HTML document in an HTML frame. Network resources can also opt into letting other origins read their information, for example, using Cross-Origin Resource Sharing [CORS]. In these cases, access is typically granted on a per-origin basis. Sending information to another origin is permitted. However, sending information over the network in arbitrary formats is dangerous. For this reason, user agents restrict documents to sending information using particular protocols, such as in an HTTP request without custom headers. Expanding the set of allowed protocols, for example, by adding support for WebSockets, must be done carefully to avoid introducing vulnerabilities [RFC6455]. 3.4.3. Pitfalls Whenever user agents allow one origin to interact with resources from another origin, they invite security issues. For example, the ability to display images from another origin leaks their height and width. Similarly, the ability to send network requests to another origin gives rise to cross-site request forgery vulnerabilities [CSRF]. However, user agent implementors often balance these risks against the benefits of allowing the cross-origin interaction. 
For example, an HTML user agent that blocked cross-origin network requests would prevent its users from following hyperlinks, a core feature of the web. When adding new functionality to the web platform, it can be tempting to grant a privilege to one resource but to withhold that privilege from another resource in the same origin. However, withholding privileges in this way is ineffective because the resource without the privilege can usually obtain the privilege anyway because user agents do not isolate resources within an origin. Instead, privileges should be granted or withheld from origins as a whole (rather than discriminating between individual resources within an origin) [BOFGO]. 3.5. Conclusion The same-origin policy uses URIs to designate trust relationships. URIs are grouped together into origins, which represent protection domains. Some resources in an origin (e.g., active content) are granted the origin’s full authority, whereas other resources in the origin (e.g., passive content) are not granted the origin’s authority. Content that carries its origin’s authority is granted access to objects and network resources within its own origin. This content is also granted limited access to objects and network resources of other origins, but these cross-origin privileges must be designed carefully to avoid security vulnerabilities. 4. Origin of a URI The origin of a URI is the value computed by the following algorithm: 1. If the URI does not use a hierarchical element as a naming authority (see [RFC3986], Section 3.2) or if the URI is not an absolute URI, then generate a fresh globally unique identifier and return that value. NOTE: Running this algorithm multiple times for the same URI can produce different values each time. Typically, user agents compute the origin of, for example, an HTML document once and use that origin for subsequent security checks rather than recomputing the origin for each security check. 2. Let uri-scheme be the scheme component of the URI, converted to lowercase. 3. If the implementation doesn’t support the protocol given by uri-scheme, then generate a fresh globally unique identifier and return that value. 4. If uri-scheme is "file", the implementation MAY return an implementation-defined value. NOTE: Historically, user agents have granted content from the file scheme a tremendous amount of privilege. However, granting all local files such wide privileges can lead to privilege escalation attacks. Some user agents have had success granting local files directory-based privileges, but this approach has not been widely adopted. Other user agents use globally unique identifiers for each file URI, which is the most secure option. 5. Let uri-host be the host component of the URI, converted to lower case (using the i;ascii-casemap collation defined in [RFC4790]). NOTE: This document assumes that the user agent performs Internationalizing Domain Names in Applications (IDNA) processing and validation when constructing the URI. In particular, this document assumes the uri-host will contain only LDH labels because the user agent will have already converted any non-ASCII labels to their corresponding A-labels (see [RFC5890]). For this reason, origin-based security policies are sensitive to the IDNA algorithm employed by the user agent. See Section 8.4 for further discussion. 6. If there is no port component of the URI: 1. Let uri-port be the default port for the protocol given by uri-scheme. Otherwise: 2. Let uri-port be the port component of the URI. 7. 
Return the triple (uri-scheme, uri-host, uri-port). 5. Comparing Origins Two origins are "the same" if, and only if, they are identical. In particular: - If the two origins are scheme/host/port triples, the two origins are the same if, and only if, they have identical schemes, hosts, and ports. - An origin that is a globally unique identifier cannot be the same as an origin that is a scheme/host/port triple. Two URIs are same-origin if their origins are the same. NOTE: A URI is not necessarily same-origin with itself. For example, a data URI [RFC2397] is not same-origin with itself because data URIs do not use a server-based naming authority and therefore have globally unique identifiers as origins. 6. Serializing Origins This section defines how to serialize an origin to a unicode [Unicode6] string and to an ASCII [RFC20] string. 6.1. Unicode Serialization of an Origin The unicode-serialization of an origin is the value returned by the following algorithm: 1. If the origin is not a scheme/host/port triple, then return the string null (i.e., the code point sequence U+006E, U+0075, U+006C, U+006C) and abort these steps. 2. Otherwise, let result be the scheme part of the origin triple. 3. Append the string "://" to result. 4. Append each component of the host part of the origin triple (converted as follows) to the result, separated by U+002E FULL STOP code points ("."): 1. If the component is an A-label, use the corresponding U-label instead (see [RFC5890] and [RFC5891]). 2. Otherwise, use the component verbatim. 5. If the port part of the origin triple is different from the default port for the protocol given by the scheme part of the origin triple: 1. Append a U+003A COLON code point (":" and the given port, in base ten, to result. 6. Return result. 6.2. ASCII Serialization of an Origin The ascii-serialization of an origin is the value returned by the following algorithm: 1. If the origin is not a scheme/host/port triple, then return the string null (i.e., the code point sequence U+006E, U+0075, U+006C, U+006C) and abort these steps. 2. Otherwise, let result be the scheme part of the origin triple. 3. Append the string "://" to result. 4. Append the host part of the origin triple to result. 5. If the port part of the origin triple is different from the default port for the protocol given by the scheme part of the origin triple: 1. Append a U+003A COLON code point (":"), and the given port, in base ten, to result. 6. Return result. 7. The HTTP Origin Header Field This section defines the HTTP Origin header field. 7.1. Syntax The Origin header field has the following syntax: origin = "Origin:" OWS origin-list-or-null OWS origin-list-or-null = %x6E %x75 %x6C %x6C / origin-list origin-list = serialized-origin *( SP serialized-origin ) serialized-origin = scheme "://" host [ ":" port ] ; <scheme>, <host>, <port> from RFC 3986 7.2. Semantics When included in an HTTP request, the Origin header field indicates the origin(s) that "caused" the user agent to issue the request, as defined by the API that triggered the user agent to issue the request. For example, consider a user agent that executes scripts on behalf of origins. If one of those scripts causes the user agent to issue an HTTP request, the user agent MAY use the Origin header field to inform the server of the security context in which the script was executing when it caused the user agent to issue the request. In some cases, a number of origins contribute to causing the user agents to issue an HTTP request. 
In those cases, the user agent MAY list all the origins in the Origin header field. For example, if the HTTP request was initially issued by one origin but then later redirected by another origin, the user agent MAY inform the server that two origins were involved in causing the user agent to issue the request. 7.3. User Agent Requirements The user agent MAY include an Origin header field in any HTTP request. The user agent MUST NOT include more than one Origin header field in any HTTP request. Whenever a user agent issues an HTTP request from a "privacy-sensitive" context, the user agent MUST send the value "null" in the Origin header field. NOTE: This document does not define the notion of a privacy-sensitive context. Applications that generate HTTP requests can designate contexts as privacy-sensitive to impose restrictions on how user agents generate Origin header fields. When generating an Origin header field, the user agent MUST meet the following requirements: - Each of the serialized-origin productions in the grammar MUST be the ascii-serialization of an origin. - No two consecutive serialized-origin productions in the grammar can be identical. In particular, if the user agent would generate two consecutive serialized-origins, the user agent MUST NOT generate the second one. 8. Security Considerations The same-origin policy is one of the cornerstones of security for many user agents, including web browsers. Historically, some user agents tried other security models, including taint tracking and exfiltration prevention, but those models proved difficult to implement at the time (although there has been recent interest in reviving some of these ideas). Evaluating the security of the same-origin policy is difficult because the origin concept itself plays such a central role in the security landscape. The notional origin itself is just a unit of isolation, imperfect as are most one-size-fits-all notions. That said, there are some systemic weaknesses, discussed below. 8.1. Reliance on DNS In practice, the same-origin policy relies upon the Domain Name System (DNS) for security because many commonly used URI schemes, such as http, use DNS-based naming authorities. If the DNS is partially or fully compromised, the same-origin policy might fail to provide the security properties required by applications. Some URI schemes, such as https, are more resistant to DNS compromise because user agents employ other mechanisms, such as certificates, to verify the source of content retrieved from these URIs. Other URI schemes, such as the chrome-extension URI scheme (see Section 4.3 of [CRX]), use a public-key-based naming authority and are fully secure against DNS compromise. The web origin concept isolates content retrieved from different URI schemes; this is essential to containing the effects of DNS compromise. 8.2. Divergent Units of Isolation Over time, a number of technologies have converged on the web origin concept as a convenient unit of isolation. However, many technologies in use today, such as cookies [RFC6265], pre-date the modern web origin concept. These technologies often have different isolation units, leading to vulnerabilities. One alternative is to use only the "registry-controlled" domain rather than the fully qualified domain name as the unit of isolation (e.g., "example.com" instead of "www.example.com"). This practice is problematic for a number of reasons and is NOT RECOMMENDED: 1. 
The notion of a "registry-controlled" domain is a function of human practice surrounding the DNS rather than a property of the DNS itself. For example, many municipalities in Japan run public registries quite deep in the DNS hierarchy. There are widely used "public suffix lists", but these lists are difficult to keep up to date and vary between implementations. 2. This practice is incompatible with URI schemes that do not use a DNS-based naming authority. For example, if a given URI scheme uses public keys as naming authorities, the notion of a "registry-controlled" public key is somewhat incoherent. Worse, some URI schemes, such as nntp, use dotted delegation in the opposite direction from DNS (e.g., alt.usenet.kooks), and others use the DNS but present the labels in the reverse of the usual order (e.g., com.example.www). At best, using "registry-controlled" domains is URI-scheme- and implementation-specific. At worst, differences between URI schemes and implementations can lead to vulnerabilities.
8.3. Ambient Authority
When using the same-origin policy, user agents grant authority to content based on its URI rather than based on which objects the content can designate. This disentangling of designation from authority is an example of ambient authority and can lead to vulnerabilities. Consider, for example, cross-site scripting in HTML documents. If an attacker can inject script content into an HTML document, those scripts will run with the authority of the document's origin, perhaps allowing the script access to sensitive information, such as the user's medical records. If, however, the script's authority were limited to those objects that the script could designate, the attacker would not gain any advantage by injecting the script into an HTML document hosted by a third party.
8.4. IDNA Dependency and Migration
The security properties of the same-origin policy can depend crucially on details of the IDNA algorithm employed by the user agent. In particular, a user agent might map some international domain names (for example, those involving the U+00DF character) to different ASCII representations depending on whether the user agent uses IDNA2003 [RFC3490] or IDNA2008 [RFC5890]. Migrating from one IDNA algorithm to another might redraw a number of security boundaries, potentially erecting new security boundaries or, worse, tearing down security boundaries between two mutually distrusting entities. Changing security boundaries is risky because combining two mutually distrusting entities into the same origin might allow one to attack the other.
9. IANA Considerations
The permanent message header field registry (see [RFC3864]) has been updated with the following registration:
Header field name: Origin
Applicable protocol: http
Status: standard
Author/Change controller: IETF
Specification document: this specification (Section 7)
10. References
10.1. Normative References
[RFC5891] Klensin, J., "Internationalized Domain Names in Applications (IDNA): Protocol", RFC 5891, August 2010.
10.2. Informative References
[BOFGO] Jackson, C. and A. Barth, "Beware of Finer-Grained Origins", Web 2.0 Security and Privacy (W2SP 2008), 2008.
[CORS] van Kesteren, A., "Cross-Origin Resource Sharing", W3C Working Draft WD-cors-20100727, July 2010, latest version available at <http://www.w3.org/TR/cors/>.
[CRX] Barth, A., Felt, A., Saxena, P., and A. Boodman, "Protecting Browsers from Extension Vulnerabilities", Proceedings of the Network and Distributed System Security Symposium (NDSS 2010), 2010.
[CSRF] Barth, A., Jackson, C., and J. Mitchell, "Robust Defenses for Cross-Site Request Forgery", Proceedings of the ACM Conference on Computer and Communications Security (CCS 2008), 2008.
[HTML] Hickson, I., "HTML5", W3C Working Draft WD-html5-20110525, May 2011, latest version available at <http://www.w3.org/TR/html5/>.
[RFC2397] Masinter, L., "The "data" URL scheme", RFC 2397, August 1998.
[RFC2817] Khare, R. and S. Lawrence, "Upgrading to TLS Within HTTP/1.1", RFC 2817, May 2000.
[RFC3490] Faltstrom, P., Hoffman, P., and A. Costello, "Internationalizing Domain Names in Applications (IDNA)", RFC 3490, March 2003.
Appendix A. Acknowledgements
We would like to thank Lucas Adamski, Stephen Farrell, Miguel A. Garcia, Tobias Gondrom, Ian Hickson, Anne van Kesteren, Jeff Hodges, Collin Jackson, Larry Masinter, Alexey Melnikov, Mark Nottingham, Julian Reschke, Peter Saint-Andre, Jonas Sicking, Sid Stamm, Daniel Veditz, and Chris Weber for their valuable feedback on this document.
Author's Address
Adam Barth
Google, Inc.
EMail: ietf@adambarth.com
URI: http://www.adambarth.com/
Keywords: Generic Programming, Pharo, Dynamically Typed Languages.
Abstract: Generic programming is a mechanism for re-using code by abstracting specific types used in classes and programs. In this paper, we present a mechanism for adding generic programming to dynamically typed languages, showing how programmers can benefit from generic programming. Furthermore, we enhance the expressiveness of generic programming with reverse generics, a mechanism for automatically deriving new generic code starting from existing non-generic code. We implemented generics and reverse generics in Pharo Smalltalk, and we successfully used them to solve a problem of reusing unit test cases. This helped us to identify a number of bugs and anomalies in the stream class hierarchy.
1 INTRODUCTION
The notion of generic programming was originally introduced in statically typed programming languages to ease manipulation and reuse of collection classes and algorithms. On the other hand, because of their flexible type systems, dynamically typed object-oriented languages have been left out of the scope of generic programming. The need for generic programming in a dynamically typed setting has been less prominent since no restriction applies over the kind of elements a collection may contain. Furthermore, in a dynamically typed language like Smalltalk, where types are absent in declarations of linguistic entities (like methods, fields, local variables), it might look odd to talk about generic programming. However, there is still a crucial context where types (i.e., class names) appear statically: class references. When creating an object, the class name is hardcoded in the program, and this makes the object instantiation process hard to abstract from. There are well-known patterns to deal with this problem, such as Factory Method (Gamma et al., 1995), Dependency Injection (Fowler, 2004), Virtual classes (Bracha et al., 2010) and ad-hoc linguistic constructs (Cohen and Gil, 2007). However, these mechanisms are effective when future extensions are foreseen. They provide little help in a scenario of unanticipated code evolution in which the programming language does not provide a dedicated evolutionary construct. This paper is about fixing this issue for dynamically typed languages using generics. As popularized by mainstream statically typed programming languages, generic programming provides a mechanism for defining template classes where some types are variables/parameters and then for providing arguments for those type variables, thus instantiating template classes into concrete and complete classes. In the following, we use the term template class to refer to a class in which some class references have been turned into parameters. Reverse generics derives such a template automatically: given an existing class C, it produces a generic G by replacing the references to a specific class contained in C with the type parameter T in G. It is the dual operation of the instantiation operation offered by generics. The generic G may be instantiated into G<T> for a provided class T. Note that the reverse generics mechanism satisfies a round-trip property: instantiating G with the class that was abstracted away yields the original class C again. Finally, an important point is that the original class C remains unmodified. Indeed, reverse generics are useful under the basic assumptions that (i) the code to be reused has to be left intact (it cannot be the subject of refactoring) and (ii) the host programming language does not provide support for looking up classes dynamically (as is the case in most dynamically typed languages, except Newspeak, which supports virtual classes (Bracha et al., 2010)).
In particular, we aim at providing, through our implementation of reverse generics, a generative approach, where new generic code is (automatically) generated starting from existing code, and the latter is not modified at all; for this reason, reverse generics are not, and do not aim at being, a refactoring technique (we also refer to Section 7). This paper extends the Pharo Smalltalk programming language with generics and reverse generics. We adapted reverse generics to cope with the lack of static type information (in (Bergel and Bettini, 2011) reverse generics were studied in the context of statically typed languages such as Java and C++). Requirements on type parameters can be defined as a safety net for a sound instantiation; we provide mechanisms for structural and nominal requirements both for generics and reverse generics in Pharo. The generic mechanisms we implemented do not depend on any Pharo-specific facilities, suggesting that generics and reverse generics are likely to be transposable to other dynamically typed languages. Although they have been realized in a dialect of Smalltalk, nothing prevents them from being applied to Ruby and Python. Even though similar mechanisms have been proposed for Groovy (Axelsen and Krogdahl, 2009), to the best of our knowledge, this is the first attempt to add a generic-like construct to Smalltalk. (The Groovy case is discussed in the related work section.) We employed reverse generics to face a classical code reuse problem. Unit tests in Pharo are inherited from Squeak, a Smalltalk dialect that served as a base for Pharo. Those tests have been written in a rather disorganized and ad-hoc fashion. This situation serves as the running example of this paper and was encountered when evolving the Pharo runtime. This helped us identify a number of bugs and anomalies in the stream class hierarchy. The contributions and innovations of this paper are summarized as follows: (i) definition of a mechanism for generics in Pharo (Section 2); (ii) description of the reverse generics model in Pharo (Section 4); (iii) description of the implementation of both mechanisms (Section 5); (iv) applicability to a non-trivial case study (Section 6). Section 7 summarizes the related work and Section 8 concludes the paper and gives some perspectives on future work.
2 GENERICS IN PHARO
This section presents a mechanism for generic programming for the Pharo/Smalltalk programming language³. The presentation of the mechanism is driven by a test-reuse scenario. We will first define a test called GCollectionTest. This test will not be tied to a particular class of the collection framework. GCollectionTest will be instantiated twice, for two different fixtures based on OrderedCollection and SortedCollection⁴. Consider the following code snippet containing a test that verifies element addition.
"Creation of the class T"
GenericParameter subclass: #T
"Creation of the class GCollectionTest with a variable"
TestCase subclass: #GCollectionTest
    instanceVariableNames: 'collection'
"Definition of the setUp method"
"It instantiates T and adds 3 numbers to it"
GCollectionTest >> setUp
    collection := T new.
    collection add: 4; add: 5; add: 10.
"Definition of the test method testAddition"
"It adds an element to the collection defined in setUp"
GCollectionTest >> testAddition
    | initialSize |
    initialSize := collection size.
    collection add: 20.
    self assert: (collection includes: 20).
    self assert: (collection size = (initialSize + 1)).
GCollectionTest is a pretty standard unit test in the spirit of the xUnit framework (most of the 115 classes that test the Pharo collection library follow a very similar structure). No reference to a collection class is made by GCollectionTest. The method setUp refers to the empty class T. GCollectionTest may be instantiated into OrderedCollectionTest and SortedCollectionTest as follows:
"Instantiate GCollectionTest and replace occurrences of T by OrderedCollection"
(GCollectionTest @ T -> OrderedCollection) as: #OrderedCollectionTest
"Replace T by SortedCollection"
(GCollectionTest @ T -> SortedCollection) as: #SortedCollectionTest
³http://www.pharo-project.org
⁴A fixture refers to the fixed state used as a baseline for tests. We consider the setUp method only in our situation.
The generic class GCollectionTest has been instantiated twice, each time assigning a different class to the parameter T. We adopted the convention of defining generic parameters as subclasses of GenericParameter. This convention has a number of advantages, as discussed in Section 5. Since GCollectionTest contains references to T, it is a generic class. There is therefore no syntactic distinction between a class and a generic class. GCollectionTest is a generic class only because T is a generic parameter and T is referenced in setUp. Pharo has been extended to support the (@ ... -> ...) as: ... construct. These three operators define the life cycle of a generic in Pharo. Compared to the Java generics mechanism, generics for Pharo operate on class references instead of types. A class provided as parameter may be freely instantiated, as in the example above. Generics in Pharo are similar to a macro mechanism. In that sense, they share similarities with C++ templates, but with a dynamically typed stance.
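For readers more familiar with other dynamically typed languages, the idea can be approximated outside Pharo. The following is a minimal Python sketch, not the authors' mechanism: the parameter T is a class attribute, the instantiate helper is a name introduced here only for illustration, and list and deque merely stand in for OrderedCollection and SortedCollection.

```python
# Illustrative Python analogue of the Pharo construct above (not the authors'
# implementation): a "generic" test case whose collection class is a parameter.
import unittest
from collections import deque

class GCollectionTest(unittest.TestCase):
    T = None  # generic parameter: the collection class under test

    def setUp(self):
        if self.T is None:
            self.skipTest("generic template class, not instantiated")
        self.collection = self.T([4, 5, 10])

    def test_addition(self):
        initial_size = len(self.collection)
        self.collection.append(20)
        self.assertIn(20, self.collection)
        self.assertEqual(len(self.collection), initial_size + 1)

def instantiate(generic, name, cls):
    """Copy the generic test class, binding its parameter T to cls."""
    return type(name, (generic,), {"T": cls})

# Two instantiations, loosely mirroring OrderedCollectionTest and
# SortedCollectionTest in the text; list and deque are stand-ins.
ListCollectionTest = instantiate(GCollectionTest, "ListCollectionTest", list)
DequeCollectionTest = instantiate(GCollectionTest, "DequeCollectionTest", deque)

if __name__ == "__main__":
    unittest.main()
```

As in the Pharo version, the template itself carries no reference to a concrete collection class; only the instantiation step supplies one.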
3 REQUIREMENTS FOR GENERIC PARAMETERS
In order for a generic class to be instantiated, a class needs to be provided for each generic parameter. To prevent generic instantiation from being ill-founded, requirements for a generic parameter may be declared. These requirements are enforced when a generic class is instantiated. Requirements are formulated along nominal and structural definitions of the base code.
Nominal Requirements. Static relationships between types may be verified when instantiating a generic class. In the example above, T must be a subtype of Collection. This is specified by defining a method requirements that returns myself inheritsFrom: Collection:
T >> requirements
    "(myself inheritsFrom: Collection)"
In that case, instantiation of GCollectionTest raises an error if a class that is not a subclass of Collection is provided as parameter. Note that we introduced the myself pseudo variable. This variable will be bound to the class provided as the generic parameter when being instantiated. The variable self, which references the receiver object, cannot be used within requirements.
Structural Requirements. In addition to nominal requirements, a generic parameter may also be structurally constrained. A constraint is satisfied based on the presence of some particular methods. In the example above, the requirements method may return
    myself includesSelectors: {#add:. #includes:. #size}
In that case, only a class that implements the methods add:, includes:, and size can be provided in place of T. We express a requirement as a boolean expression. The keywords inheritsFrom: and includesSelectors: are predicates. They may therefore be combined using boolean logic operators. For instance, we can express all the above requirements as follows:
T >> requirements
    "(myself inheritsFrom: Collection) and: [ myself includesSelectors: {#add:. #includes:. #size} ]"
Dynamically typed languages favor sophisticated debugging and testing sessions over static source code verification. The lack of static type annotations makes any isolated check on a generic not feasible. Completeness of T's requirements cannot be verified by the compiler; thus, it is up to the programmers to provide a set of satisfactory requirements when defining generic parameters. In practice, this has not been a source of difficulties.
4 REVERSE GENERICS IN PHARO
This section presents the reverse generics mechanism in Pharo; we will use a scenario that consists of reusing unit tests. Consider the following class WriteStreamTest taken from an earlier version of Pharo:
ClassTestCase subclass: #WriteStreamTest
WriteStreamTest >> testIsEmpty
    | stream |
    stream := WriteStream on: String new.
    self assert: stream isEmpty.
    stream nextPut: $a.
    self deny: stream isEmpty.
    stream reset.
    self deny: stream isEmpty.
The class WriteStreamTest is defined as a subclass of ClassTestCase, itself a subclass of SUnit's TestCase. WriteStreamTest defines the method testIsEmpty, which checks that a new instance of WriteStream is empty (i.e., answers true when isEmpty is sent). When the character $a is added into the stream, it is not empty anymore. Resetting a stream moves the stream pointer to the beginning of the stream, without removing its contents. WriteStreamTest has 5 other similar methods that verify the protocol of WriteStream. We consider that most of the important features of WriteStream are well tested. However, WriteStream has 27 subclasses, which did not receive the same attention in terms of testing. Only 3 of these 27 classes have dedicated tests (FileStream, ReadWriteStream and MultiByteFileStream). Manually scrutinizing these 3 classes reveals that the features tested are different from the ones tested in WriteStreamTest⁶. The remaining 24 subclasses of WriteStream are either not tested, or indirectly tested. An example of indirect testing: CompressedSourceStream is a subclass of WriteStream for which the features of WriteStream are not tested. CompressedSourceStream is essentially used by the file system with FileDirectory, which is tested in FileDirectoryTest. The situation may be summarized as follows: WriteStream is properly tested and has 27 subclasses, but none of these subclasses have the features defined in WriteStream tested for their particular class. This situation has been addressed by refactoring the collection framework using traits (Ducasse et al., 2009). We make a different assumption here: the base system must be preserved, which implies that a refactoring is not desirable. Refactoring may have some implications on the overall behavior, especially in terms of robustness and efficiency. It has been shown that inheritance is not that helpful in this situation (Flatt and Felleisen, 1998; Bergel et al., 2005). With our implementation of reverse generics in Pharo, a generic class GStreamTest can be obtained from the class WriteStreamTest by turning all references of WriteStream into a parameter that we name T.
Generic named: #GStreamTest for: WriteStream -> T @ WriteStreamTest
Following a Java-like syntax (Bergel and Bettini, 2011), the above code corresponds to the following reverse generic definition:
class GStreamTest<T> = WriteStreamTest>WriteStream<T>
The generic GStreamTest is defined as a copy of WriteStreamTest in which all references to WriteStream have been replaced by the type T introduced in the previous section (Section 2). GStreamTest may now be instantiated by replacing all references of WriteStream with untested subclasses of WriteStream, as illustrated in Section 2:
"Instantiate GStreamTest and replace occurrences of T by ZipWriteStream"
(GStreamTest @ T -> ZipWriteStream) as: #ZipWriteStreamTest
"Replace T by HtmlFileStream"
(GStreamTest @ T -> HtmlFileStream) as: #HtmlFileStreamTest
⁶According to our experience, this is a general pattern. Often programmers focus essentially on testing added methods and variables when subclassing.
Figure 1 summarizes the generalization and instantiation of the WriteStreamTest example. Reverse generics targets class instantiation and the sending of messages to a class. The above scenario could be solved by having a super abstract class in which the class to be tested is returned by a method. This method could then be overridden in subclasses (factory method design pattern (Gamma et al., 1995)). However, this solution is not always the best approach: First, tests of the collection libraries cannot be optimally organized using single inheritance (Ducasse et al., 2009). Second, the code to be reused may not always be editable and modifiable. This is often a desired property to minimize ripple effects across package versions.
4.1 Requirements when Generalizing
We have previously seen that requirements may be defined on generic parameters (Section 3). These requirements equally apply when generalizing a class. Turning references of WriteStream into a parameter T may be constrained with the following requirements:
T >> requirements
    "(myself inheritsFrom: Stream) and: [ myself includesSelectors: {#isEmpty . #reset} ]"
Further requirements could be that the parameter T understands the class-side message on:, and the instance-side message nextPut:. However, this would be redundant with the requirement myself inheritsFrom: Stream, since Stream defines the methods nextPut: and on:. Requirements may also be set for class methods, e.g., myself class includesSelectors: {#new:} makes the presence of the class method new: mandatory.
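The class-substitution idea behind this derivation can again be approximated in other dynamically typed languages. The sketch below is a Python analogue, not the Pharo implementation: it copies a test class and rebinds the global name through which the class under test is reached, leaving the original test untouched. BytesIO, LimitedBytesIO, and reverse_generic are stand-in names introduced only for this illustration.

```python
# Illustrative Python analogue of reverse generics (not the authors' Pharo code):
# derive a new test class from an existing one by rebinding the class it refers
# to, without modifying the original class.
import types
import unittest
from io import BytesIO

class BytesIOTest(unittest.TestCase):
    def test_write_makes_non_empty(self):
        stream = BytesIO()  # hard-coded class reference, like WriteStream above
        self.assertEqual(stream.getvalue(), b"")
        stream.write(b"a")
        self.assertNotEqual(stream.getvalue(), b"")

class LimitedBytesIO(BytesIO):
    """Hypothetical subclass standing in for an untested stream subclass."""
    pass

def reverse_generic(test_class, original, replacement, name):
    """Copy test_class, rebinding every global reference to `original` so the
    copy exercises `replacement` instead; the original class stays intact."""
    namespace = {}
    for attr, value in vars(test_class).items():
        if attr in ("__dict__", "__weakref__"):
            continue
        if isinstance(value, types.FunctionType):
            new_globals = dict(value.__globals__)
            for key, bound in list(new_globals.items()):
                if bound is original:
                    new_globals[key] = replacement
            value = types.FunctionType(value.__code__, new_globals, value.__name__,
                                       value.__defaults__, value.__closure__)
        namespace[attr] = value
    return type(name, test_class.__bases__, namespace)

LimitedBytesIOTest = reverse_generic(BytesIOTest, BytesIO, LimitedBytesIO,
                                     "LimitedBytesIOTest")

if __name__ == "__main__":
    unittest.main()
```

Rebinding the copied methods' globals plays roughly the role that adjusting a compiled method's literal bindings plays in the Pharo implementation described in Section 5.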
4.2 Capturing Inherited Methods
Instantiating a generic G, which is obtained from generalizing a class C, makes copies of C with connections to different classes. This process may also copy superclasses of C when methods defined in superclasses need to have new references to classes. This situation is illustrated in Figure 2, which adopts a different example. The class AbstractFactory has an abstract method create. PointFactory is a subclass of it that creates instances of Point (not represented in the figure). This class is subclassed into EnhPointFactory, which overrides create to count the number of instances that have been created. Consider the generic GEnhFactory<T> = EnhPointFactory>Point<T. This generic may be instantiated with a class Car to produce cars instead of points: CarFactory = GEnhFactory<Car>. The class Point is referenced by the superclass of EnhPointFactory. Generalizing and instantiating EnhPointFactory therefore has to turn the Point reference contained in PointFactory into Car. This is realized in reverse generics by automatically copying the superclass as well into a new generic class with a generated name. The class inheritance is copied up to the point in the hierarchy where no superclass references a generic parameter.
5 IMPLEMENTATION
The homogeneity of Pharo, and in general of most Smalltalk dialects, greatly eases the manipulation of a program's structural elements such as classes and methods. In Smalltalk, classes and methods are first-class entities. They can be manipulated like any other object. A compiled method is a set of bytecode instructions with an array of literals. This array contains all references to classes used by this compiled method (Goldberg and Robson, 1983). Instantiating a generic is done by copying a class, assigning a different name, and adjusting the array of literals with a different set of class bindings. An example of this procedure is depicted in Figure 3. A number of design decisions were made:
- The Pharo syntax has not been modified. This has the great advantage of not impacting the current development and source code management tools. This is possible since classes are first-class objects in Pharo.
- The Smalltalk meta-object protocol has not been extended. Again, this decision was made to limit the impact on the development tools. As a consequence, there is no distinction between a generic and a class; thus the generic mechanism can be implemented as a simple library to load.
Indeed, these design choices are also based on past experience with Smalltalk extensions: the last significant change of the language was realized in 2004 (Lienhard, 2004), when traits were introduced in Squeak, the predecessor of Pharo. In the current version of Pharo, the support for traits is fragile at best (bugs remain and many tools are not trait-aware). The experience gained with traits suggests that realizing a major change in the programming language is challenging and extremely resource consuming. Note that, by using our reverse generics, one can modify only the original existing code (i.e., the classes that are not generic), and then, automatically, propagate the modifications to the code obtained by reverse generics. The implementation presented in this paper is freely available (under the MIT license) at http://www.squeaksource.com/ReverseGeneric.html.
6 CASE STUDY: APPLICATION TO THE PHARO STREAM HIERARCHY
The situation described in Section 4 is an excerpt of the case study we realized. For each of the 24 subclasses of WriteStream, we instantiated GStreamTest. This way, about 24 new unit tests were generated. The WriteStreamTest class defines 6 test methods. We therefore generated 24 * 6 = 144 test methods. Each of the generated tests is a subclass of ClassTestCase, which itself defines 3 test methods. Running these 24 unit tests executes 144 + 27 * 3 = 225 test methods. Running these 225 test methods results in: 225 runs, 192 passed, 21 failures, 12 errors. Since the 6 tests in WriteStreamTest pass, this result essentially says that there are some functionalities that are verified for WriteStream but are not verified for some of its subclasses. Examples of the culprit test methods for the failures are CrLfFileStreamTest>>testNew and LimitedWriteStreamTest>>testSetToEnd. The fact that these two tests fail uncovers some bugs in the classes CrLfFileStream and LimitedWriteStream.
The body of CrLfFileStreamTest>>testNew is self should: [CrLfFileStream new] raise: Error, meaning that a CrLfFileStream should not be instantiated with new. However, the class can actually be instantiated with new, resulting in a meaningless and unusable object. Another example of a bug was found in LimitedWriteStream. This class is used to limit the amount of data to be written in a stream. The body of LimitedWriteStreamTest>>testSetToEnd is:
```
LimitedWriteStreamTest >> testSetToEnd
    | string stream |
    string := 'hello'.
    stream := LimitedWriteStream with: ''.
    stream nextPutAll: string.
    self assert: stream position = string size.
    stream setToEnd.
    self assert: stream position = string size.
    self assert: stream contents = string.
```
It essentially verifies the behavior of the stream index cursor. This test signals an error in the expression stream nextPutAll: string. By inspecting what triggered the error, we discovered that when a LimitedWriteStream is instantiated with: '' (an empty string), the object is initialized with a nil value as the limit, resulting in a meaningless comparison (newEnd > limit in the method LimitedWriteStream>>nextPutAll:). Not all the test methods that fail or raise an error are due to bugs in the stream class hierarchy. We roughly estimate that only 11 of these 33 test methods have uncovered tangible bugs. The remaining failures and errors are due to differences in how classes should be initialized. For example, the test StandardFileStreamTest>>testSetToEnd raises an error because a StandardFileStream cannot be instantiated with the message with: (it is instantiated with fileNamed:, which requires a file name as argument). Although no bug has been located, this erroneous test method suggests that the method with: should be canceled (i.e., raise an explicit error saying it should not be invoked). This experiment has a number of contributions:
- It demonstrates the applicability of our generics and reverse generics to a non-trivial scenario,
- It helped us identify a number of bugs and anomalies in the Pharo stream hierarchy.
7 RELATED WORK
When Java generics were designed, one of the main intents was backward compatibility with the existing Java collection classes. The enabling mechanism is that all the generic type parameters must be "erased" after compilation (type erasure model (Odersky and Wadler, 1997; Bracha et al., 1998)). Therefore, all the run-time type information about parametrized types is completely lost after compilation, thus making it impossible to execute the operations which require run-time types, such as, e.g., object instantiations. This limits the expressiveness of Java generics (Allen and Cartwright, 2002): for instance, if T is a generic type, the code T x = new T() is not valid. For these reasons, the generic type system of Java cannot be considered "first-class", since generic types cannot appear in any context where standard types can appear (Allen et al., 2003). On the contrary, the generic programming mechanisms provided by C++ do not suffer from all these issues. In particular, the C++ compiler generates a different separate copy for each generic class instantiated with specific types (and the typechecking is performed on the instantiated code, not on the generic one).
Therefore, while in Java a Collection<String> and a Collection<Integer> would basically refer to the same class (i.e., the type erased class Collection), in C++ they would refer to two separate classes, where all the type information remains available. Therefore, in C++, all the operations which require run-time types are still available in generic classes, and hence the C++ type generic system can be considered “first-class” (notably, C++ templates were formalized and proved type safe (Siek and Taha, 2006)). For instance, if T is a generic type, the code T x = new T() in C++ is perfectly legal, since C++ templates are similar to a macro expansion mechanism7. We refer to Ghosh (Ghosh, 2004) and Batov’s work (Batov, 2004) for a broader comparison between Java generics and C++ templates. In order for generic types to be used and type checked in a generic class, those types must be constrained with some type requirements. Constraints on generic types are often referred to as concepts (Kapur et al., 1981; Austern, 1998). Java generics require explicit constraints, thus a concept is defined using a Java interface or a base class, and a type satisfies a concept if it implements that specific interface or it extends that specific base class. On the contrary, the C++ compiler itself infers type constraints on templates and automatically checks whether they are satisfied when such generic type is instantiated. In our implementation, generic parameters can be assigned constraints using nominal (similarly to Java) and structural requirements (similarly to concepts), as illustrated in Section 3. In dynamically typed languages, like Smalltalk, where types are not used in declarations, the context where generics are useful is in object instantia- --- 7Actually, C++ templates are much more than that: (partial) specialization of templates is one of the main features that enables computation at compile time, often referred to as template metaprogramming (Abrahams and Gurtovoy, 2004). tion; thus, with this respect, the generics presented in this paper are related to C++ templates, rather than to Java generics. The generics needed in the context of Smalltalk act at a meta-level, by generating new classes starting from existing ones, thus, they have similarities with *generative programming* mechanisms (Eisenacker and Czarnecki, 2000) and C++ meta programming (Abrahams and Gurtovoy, 2004). This meta programming mechanism is evident also in our generics and reverse generics implementation in Pharo: new code is generated starting from existing one, without modifying the latter. This takes place in two steps: with reverse generics a brand new generic version is obtained starting from existing code; then, by instantiating generic classes, the generic code is adapted and reused in a new context. There seem to be similarities among reverse generics and some refactoring approaches: however, the intent of reverse generics is not to perform reverse engineering or refactoring of existing code, (see, e.g., (Duggan, 1999; Dincklage and Diwan, 2004; Kiezun et al., 2007)) but to extrapolate possible generic “template” code from existing one, and reuse it for generating new code. Note that this programming methodology will permit modifying only the original existing code, and then, automatically, spread the modifications to the one obtained by reverse generics. 
A first attempt to automatically extract generic class definitions from an existing library has been conveyed by Duggan (Duggan, 1999), well before the introduction of generics into Java. Besides the reverse engineering aspect, Duggan’s work diverges from reverse generics regarding downcast insertion and parameter instantiation. Duggan makes use of *dynamic subtype constraint* that inserts runtime downcasts. A parameterized type may be instantiated, which requires some type-checking rules for the creation of an object: the actual type arguments must satisfy the upper bounds of the formal type parameters in the class type. Kiezun et al. propose a type-constraints-based algorithm for converting non-generic libraries to add type parameters (Kiezun et al., 2007). It handles the full Java language and preserves backward compatibility. It is capable of inferring wildcard types and introducing type parameters for mutually-dependent classes. Reverse engineering approaches ensure that a library conversion preserves the original behavior of the legacy code. This is a natural intent since such a conversion is exploited as a refactoring. Instead, the purpose of reverse generics is to replace static types references contained in existing classes with specialized ones and then to produce a brand new class. A limitation of first-order parametric polymorphism is that it is not possible to abstract over a type constructor. For instance, in List<T>, List is a type constructor, since, given an argument for T, e.g., Integer, it builds a new type, i.e., List<Integer>. However, the type constructor List itself is not abstracted. Therefore, one cannot pass a type constructor as a type argument to another type constructor. Template template parameters (Weiss and Simonis, 2001) in C++ provides a means to abstract over type constructors. Moors, Piessens and Odersky (Moors et al., 2008) extended the Scala language (Odersky et al., 2008) with type construction polymorphism to allow type constructors as type parameters. Therefore, it is possible not only to abstract over a type, but also over a type constructor; for instance, a class can be parameterized over Container<T>, where Container is a type constructor which is itself abstracted and can be instantiated with the actual collection, e.g., List or Stack, which are type constructors themselves. The generics mechanism presented in this paper acts at the same level of first-order parametric polymorphism, thus, it shares the same limitations. An interesting extension would be to be able to switch to the higher level of type constructor polymorphism, but this is an issue that still needs to be investigated. The *Dependency Injection* pattern (Fowler, 2004) is used to “inject” actual implementation classes into a class hierarchy in a consistent way. This is useful when classes delegate specific functionalities to other classes: messages are simply forwarded to the object referenced in a field. These fields will have as type an interface (or a base class); then, these fields will be instantiated with derived classes implementing those interfaces. This way the actual behavior is abstracted, but we need to tackle the problem of “injecting” the actual implementation classes: we do not have the implementation classes’ names hardcoded in the code of the classes that will use them, but we need to initialize those classes somewhere. Moreover, we need to make sure that, if we switch the implementation classes, we will do that consistently throughout the code. 
Typically this can be done with *factory method* and *abstract factory* patterns (Gamma et al., 1995), but with *dependency injection frameworks* it is easier to keep the desired consistency, and the programmer needs to write less code. The reverse generics mechanism is not related to object composition and delegation, i.e., the typical context of the *inversion of control* philosophy that dependency injection tries to deal with. With reverse generics the programmer does not have to design classes according to the pattern of abstracting the actual behavior and then delegating it to factory methods; on the contrary, the reverse generics mechanism allows generating new code (i.e., new classes) from existing code, without modifying the original code.
--- ⁸ The repetition of "template" is not a mistake. ⁹ Scala uses [] instead of <>.
Package Template (Sørensen et al., 2010) is a mechanism for reusing and adapting packages by re-binding class references. A version has been proposed for Groovy (Axelsen and Krogdahl, 2009). Package Template offers sophisticated composition mechanisms, including class renaming and merging. The reverse generics mechanism is able to turn a non-generic class into a generic one, while Package Template is not designed for this purpose. Traits (Ducasse et al., 2006) were introduced in the dynamically typed class-based language Squeak/Smalltalk to counter the problems of class-based inheritance with respect to code reuse. Although both traits and generic programming aim at code reuse, their main contexts are different: traits provide reuse by sharing methods across classes (in a much more reusable way than standard class-based inheritance), while generic programming (and also our generics) provides a mechanism to abstract from the type implementing specific behavior. Combining our generic mechanism with traits looks promising in that respect, also for the meta-programming features of traits themselves (Reppy and Turon, 2007).
8 CONCLUSIONS
The mechanisms presented in this paper provide features both to write generic code in a dynamically typed language and to extrapolate possible generic "template" code from existing code and reuse it for generating new code. In our approach, class generalization and generic instantiation are based on class copying, similarly to C++ templates. Although this implies some code duplication in the generated code, this is consistent with the meta-level approach that is typical of generative programming mechanisms (Eisenecker and Czarnecki, 2000). Since highly parametrized software is harder to understand (Gamma et al., 1995), we may think of a programming methodology where a specific class is developed and tested in a non-generic way, and then made available to users via its "reversed" generic version (in this case, we really need the non-generic version for testing purposes, so the code must not be refactored). Therefore, reverse generics can be used as a development methodology, not only as a way to turn existing classes into generic ones: one can develop, debug and test a class with all the types instantiated, and then expose to the "external world" the generic version created through reverse generics. A limitation of the implementation presented in this paper is that the generic parameters (like T in Section 2 and Section 4) are global classes (subclasses of GenericParameter), thus there can be only one such generic parameter (together with its requirements, Section 3 and Section 4.1).
However, in this first prototype implementation of generics and reverse generics in Pharo, this did not prevent us from applying these mechanisms to class hierarchies (like the case study of Section 6) and studying their applicability. Of course, in future versions, we will deal with this issue and remove the "globality" of generic parameters. To the best of our knowledge, no generic (and reverse generic) programming language construct is available in Smalltalk, Ruby or Python that achieves the same capabilities as we presented in this paper. It is the subject of future work to further investigate whether our proposal can be applied to other dynamically typed languages. REFERENCES
JICSIT2011 / ITAIC 2011 Keynote http://www.jicsit.org/ A Dream of Software Engineers -- Service Orientation and Cloud Computing Yinong Chen Arizona State University, Tempe, Arizona, U.S.A. Outline ➢ Introduction ▪ Programming Paradigms and Software Engineering ▪ Service Orientation vs. Object Orientation ➢ A Dream of Software Engineers: Service-Oriented Computing and Workflow-Based Software Development ➢ Cloud Computing, we could not even dream ▪ Cyber-Physical Device and Robot as a Service ▪ ASU Service Repository Programming Paradigms Granularity Level 1st Generation Software Engineering - Procedural Programming - Assembly, Algo, FORTRAN 2nd Generation Software Engineering - Object-Oriented Programming - C++, Java, C# - Functional Programming - e.g. LISP/Scheme - Logic Programming - e.g. Prolog 3rd Generation Software Engineering - Service-Oriented Programming - Java, C#, Workflow - Component-Based Programming - Distributed Objects - C++, Java, C# Time - 50s - 60s - 70s - 80s - 90s - 00s - 10s The First Generation Software Engineering ❖ The features of the first generation ▪ Waterfall model ▪ Structured programming and design ▪ Structured analysis ▪ Compilers and interpreters advancement ▪ Abstract data types ▪ Layered architecture ❖ From machine and assembly programming to high-level programming. Significant productivity gain. ❖ Main technologies include compilers and OS, software development models, and programming languages ❖ Programming languages are the key. The Second Generation Software Engineering - The features of the second generation - Object-oriented analysis, design, and programming - UML (Unified Modeling Language), Agile processes - Software architecture patterns and design patterns - CMM (Capability Maturity Model) and CMMI (CMM Integration) - Model checking - Modeling (such as object-oriented modeling) rather than programming is the key technology, and also classification and cataloguing (patterns) best software practices, and refinement of processes. Not just coding. - Further productivity gain due to availability of tools, techniques, and documentation. - Development process and techniques are the key. The Third Generation Software Engineering ❖ The features of the third generation ▪ Service-oriented computing (development + execution combination) ▪ Cloud computing and SaaS (Software as a Service) with applications ▪ SaaS: development + execution + automated runtime management, including resource (sharing) and security (privacy) management. It introduces many scientific research questions into software engineering, such as data mining, control theory, and statistics. ❖ Expect very rapid software customization and deployment. ❖ Platform is the key. Imperative Software Development Software Development - **Requirement analysis** - What is the need of users? e.g., sorting numbers - **Specification** - What is the formal requirement? - Input $x_i$, $i = 1, \ldots, n$; - Output: $x_i \leq x_j$, if $i \leq j$ - **Designs** - How to do it? - Algorithms: Bubble, Merge, … - **Implementation (coding)** - How to code it? - (Programming in C, C++, Java, C#) - Run it in a virtual environment, simulation in the last two pages. - **Testing / Evaluation** - Put it in the real environment, e.g., the grade book application. - **Deployment** Component-Based Software Development Bricks and Tiles imperative Component-based Object-Oriented and Service-Oriented Software Development 1. Requirement analysis 2. Problem decomposition 3. Services development 4. Services testing 5. 
Service repository 6. Object-oriented development 7. Class/Object library 8. Object testing 9. Application building 10. Application builder 11. Testing 12. Deployment Object-Oriented Software Development Organization X: Component library Organization Y: Component library Service-Oriented Software Development Application Organization X: Component library Organization Y: Component library Organization Z: Component library Service broker Auto-searchable Registration Standard Interface Found Outline - Introduction - A Dream of Software Engineering: Service-Oriented Computing and Workflow-Based Software Development - Cloud Computing, we could not even dream - Cyber-Physical Device and Robot as a Service - ASU Service Repository Clearer Tiered Architecture - Presentation Layer (GUI) - Application Processing Layer, with Low-Level Code - Service and Component Layer - Data Management Layer - Workflow Layer with High-Level Composition - Service and Component Layer - WF Activities with Low-Level Code - Data Management Layer Using Flowchart as Code Movie Service: http://www.ignyte.com/webservices/ignyte.whats.showing.webservice/moviefunctions.asmx Zip Code Service: http://www.webserviceex.net/uszip.asmx Flowchart and Workflow Code An Online Ordering Process User submits request Submitted Assigned Approved Rejected Ordered Completed approve reassign cancel Define the States of a Finite State Machine User submits request - Submitted - SubmittedInitialization - SubmittedFinalization - Assigned - AssignedInitialization - AssignedFinalization - Approved - Rejected - Reassign - Drop StateActivity, EventDrivenActivity, StateInitializationActivity or StateFinalizationActivity here - Ordered - Drop StateActivity, EventDrivenActivity, StateInitializationActivity or StateFinalizationActivity here - Completed - Drop StateActivity, EventDrivenActivity, StateInitializationActivity or StateFinalizationActivity here Define the Transitions between the States User submits request - Submitted - Assigned - Approved - Rejected - Ordered - Completed - Assigned - AssignedInitialization - OnAssigned - SubmittedFinalization - Approved - ApprovedInitialization - OnApproved - ApprovedFinalization - Rejected - RejectedInitialization - OnReassigned - OnCanceled - RejectedFinalization - Ordered - OrderedInitialization - OnOrderReceived - OrderedFinalization - Completed Flowchart of a Mortgage Application Site Executable Workflow Open the “Initial Screening” Flowchart Outline - Introduction - A Dream of Software Engineering: Service-Oriented Computing and Workflow-Based Software Development - Cloud Computing, we could not even dream - Cyber-Physical Device and Robot as a Service - ASU Service Repository The U.S. FEDERAL CLOUD COMPUTING STRATEGY FEDERAL CLOUD COMPUTING STRATEGY Vivek Kundra U.S. Chief Information Officer FEBRUARY 8, 2011 Cloud computing headed for $20B market Administration strategy calls for data center reduction to pay for plan By Kathleen Hickey • Feb 18, 2011 The market for cloud services is about to explode in the government space if Federal CIO Vivek Kundra has his way. His recently released Federal Cloud Computing Strategy calls for about a quarter of federal IT spending, or $20 billion, to be committed to cloud systems. Additionally, under the Cloud First program, agencies will be required to move three services to the cloud within 18 months, adopt a cloud model wherever feasible and evaluate cloud options before making investments. 
An estimated $20 billion of the federal government’s $80 billion in IT spending could be used for cloud computing, Kundra said in the report. The agencies expected to spend the most on cloud technology are the Homeland Security and Treasury departments, at approximately $2.4 billion apiece, followed by the Defense, Veterans Affairs and Transportation departments. The top contractors at those agencies include companies such as Hewlett-Packard, Computer Sciences Corp., IBM, and Lockheed Martin. Vivek Kundra’s “Cloud First” Policy • Government agencies have been asked to consider a cloud computing option first when they planned to launch a new IT project; and they are required to identify three systems they would like to move to the cloud. • Kundra believes Cloud Computing is the next “Internet” that has changed the world, not just computing! Essential Characteristics of Cloud Computing http://csrc.nist.gov/groups/SNS/cloud-computing/ - On-demand services, - Broad network access, - Resource pooling, - Rapid elasticity, and - Measured services - Minimal management effort Web 2.0, Web 3.0, and Cloud Computing Static WWW (Web 1.0) URI, HTML, HTTP Dynamic SOC-based Web 2.0 UDDI, WSDL, SOAP Semantics-based Web 3.0 RDF, RDFS, OWL Syntax Semantics Cloud Computing Components of Cloud Computing - Software as a Service - Platform as a Service - Infrastructure as a Service - X as a Service - Test as a Service - Cyber Physical Devices - Device as a Service - Robot as a Service X as a Service: What is X? - X is unknown - X is a variable - X is a dream - X is what we could not even dream - X is everything - Social networking: We can hide nothing - Ontology: Everything can be reasoned of - Virtual and reality **Everything is under the cloud!** Outline - Introduction - A Dream of Software Engineering: Service-Oriented Computing and Workflow-Based Software Development - Cloud Computing, we could not even dream - Cyber-Physical Device and Robot as a Service - ASU Service Repository As a Part of Cloud Computing Software as a Service Platform as a Service Infrastructure as a Service X as a Service Cyber Physical Devices Device as a Service Robot as a Service Service Interface in HTTP, URI, REST. WSDL, SOAP, etc. Current Efforts in Device Integration: Augmented Reality (1) - **Pachube** - Data *infrastructure* for users to build their Internet of Things - Manage real-time data from sensors, devices, and environments - **Wikitude** World Browser: - Organize and display information about users' surroundings in a mobile camera view. - Similar to Pachube, but focus on photos and videos Current Efforts: Device as a Service (2) - **Devices Profile for Web Services (DPWS)** defines implementation constraints to enable secure Web Service messaging, discovery, description, and eventing on resource-constrained devices; - DPWS specification was initially published in 2004 and was submitted for standardization to OASIS in 2008. 
DPWS 1.1 was approved as OASIS Standard together with WS-Discovery 1.1 and SOAP-over-UDP 1.1 2009; - Microsoft .Net Framework Class Library defined classes for supporting DPWS device programming Reference: http://en.wikipedia.org/wiki/Devices_Profile_for_Web_Services Current Efforts: Device as a Service (2) • Device with Built-in Service Interface, for example: • Netduino Plus: Works with .Net Micro Framework to facilitate service to device communication http://www.amazon.com Current Efforts: Robot as a Service (3) - ASU Implementation of Robot as a Service - Web service wraps the device drivers - Web Application access the Web services Video: http://vimeo.com/9740048 Join the Cloud and Develop RaaS Add Partner Add Service Add Application ASU Tsinghua University RaaS RaaS RaaS RaaS RaaS RaaS Outline - Programming Paradigms and Software Engineering - Service Orientation vs. Object Orientation - Service-Oriented Computing and Workflow-Based Software Development - Cloud Computing, we could not even dream - Cyber-Physical Device and Robot as a Service - ASU Service Repository http://venus.eas.asu.edu/WSRepository/repository.html Textbook Third Edition Service-Oriented Computing and Web Software Integration From Principles to Development Yinong Chen and Wei-Tek Tsai ASU Repository of Web Services and Web Applications ASU Service Repository http://venus.eas.asu.edu/WSRepository/repository.html - SOAP/WSDL Services - RESTful Services - Workflow services - Web applications - Robot as Service: <table> <thead> <tr> <th>Crypto service</th> <th>ASP.Net Encryption and decryption string(string)</th> </tr> </thead> <tbody> <tr> <td>Data caching</td> <td>Caching disk file contents in browser</td> </tr> <tr> <td>Dynamic graphics</td> <td>Vending machine, generate graphics without using user control</td> </tr> <tr> <td>Dynamic graphics</td> <td>Vending machine, generate graphics in user control</td> </tr> <tr> <td>Forms security</td> <td>Authentication and authorization application</td> </tr> <tr> <td>Image Verifier</td> <td>Application that tests the RESTful ImageVerifer service</td> </tr> <tr> <td>Image Verifier</td> <td>Application that tests the WSDL-SOAP ImageVerifer service</td> </tr> <tr> <td>Random String</td> <td>Application that tests the RandomString service</td> </tr> <tr> <td>Shopping cart</td> <td>Enter items to catalogue, add to cart, remove from cart</td> </tr> <tr> <td>XML file read write</td> <td>Save book information into XML file in server</td> </tr> <tr> <td>Service Type</td> <td>Description</td> </tr> <tr> <td>------------------------------</td> <td>-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------</td> </tr> <tr> <td>Crypto service in SVC</td> <td>WCF-based WSDL-SOAP service with two operations: string Encrypt(string); and string Decrypt(string);</td> </tr> <tr> <td>Image Verifier in RESTful</td> <td>WCF RESTful service with GetImage/3Nt$@ operation</td> </tr> <tr> <td>Image verifier in workflow</td> <td>Workflow-based service</td> </tr> <tr> <td>Messenger service</td> <td>WCF service with two operations: bool SendMessage(string Username, string Message); and string[] ReceiveMessage(string UserID);</td> </tr> <tr> <td>Mortgage Service in Workflow</td> <td>Microsoft MSDN Magazine mortgage service example in workflow:</td> </tr> <tr> <td></td> <td></td> </tr> <tr> <td></td> <td></td> </tr> <tr> <td></td> <td></td> </tr> <tr> 
<td>Number Guess in RESTful</td> <td>WCF RESTful service with two operations: int secretNumber(int lower, int upper); and string checkNumber(int userName, int secretNum);</td> </tr> <tr> <td>Document type</td> <td>Description</td> </tr> <tr> <td>Document type definition (DTD)</td> <td>Definition example: <a href="http://venus.eas.asu.edu/WSRepository/xml/instructor.dtd">http://venus.eas.asu.edu/WSRepository/xml/instructor.dtd</a></td> </tr> <tr> <td>RDF file</td> <td>RDF schema definition file: <a href="http://venus.eas.asu.edu/WSRepository/xml/Courses.rdf">http://venus.eas.asu.edu/WSRepository/xml/Courses.rdf</a></td> </tr> <tr> <td>Robot as a Service</td> <td>A Web application that accesses a Web service implemented on a cyber-physical device, a Parallax Hex Crawler controlled by Atom</td> </tr> <tr> <td>Robot in simulation</td> <td>Simulated robot with laser sensor in a maze</td> </tr> <tr> <td>Smart home</td> <td>A smart home using simulated cyber-physical devices: <a href="http://venus.eas.asu.edu/WSRepository/SmartHome/Smarthome.html">http://venus.eas.asu.edu/WSRepository/SmartHome/Smarthome.html</a></td> </tr> <tr> <td>XML file</td> <td>Books stored in an XML file: <a href="http://venus.eas.asu.edu/WSRepository/xml/Courses.xml">http://venus.eas.asu.edu/WSRepository/xml/Courses.xml</a></td> </tr> <tr> <td>XML schema file</td> <td>Schema of the XML book file: <a href="http://venus.eas.asu.edu/WSRepository/xml/Course.xsd">http://venus.eas.asu.edu/WSRepository/xml/Course.xsd</a></td> </tr> <tr> <td>XML style sheet</td> <td>Style sheet for the XML book file: <a href="http://venus.eas.asu.edu/WSRepository/xml/Courses.xs">http://venus.eas.asu.edu/WSRepository/xml/Courses.xs</a></td> </tr> </tbody> </table> Where to Find the Information? A web search for "Yinong Chen" points to his publications page (www.public.asu.edu/~ychen10/), his ASU directory profile (https://webapp4.asu.edu/directory/person/328180), and his Ira A. Fulton Schools of Engineering page (engineering.asu.edu/people/328180).
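The keynote's online ordering example (states Submitted, Assigned, Approved, Rejected, Ordered, and Completed, with events such as approve, reassign, and cancel) amounts to a small finite state machine. The following is a minimal, hypothetical Python sketch of that idea; the transition table is an assumption pieced together from the slide text, and the class and method names are purely illustrative (they are not part of any workflow framework mentioned in the keynote).

```python
# Minimal finite-state-machine sketch of the online ordering workflow.
# State and event names follow the slides; the exact transitions are assumed.

class OrderWorkflow:
    # (current state, event) -> next state
    TRANSITIONS = {
        ("Submitted", "assign"): "Assigned",
        ("Assigned", "approve"): "Approved",
        ("Assigned", "reassign"): "Assigned",
        ("Assigned", "reject"): "Rejected",
        ("Rejected", "reassign"): "Assigned",
        ("Rejected", "cancel"): "Completed",
        ("Approved", "order"): "Ordered",
        ("Ordered", "complete"): "Completed",
    }

    def __init__(self):
        self.state = "Submitted"  # entered when the user submits a request

    def fire(self, event: str) -> str:
        """Apply an event; raise if it is not allowed in the current state."""
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ValueError(f"event '{event}' not allowed in state '{self.state}'")
        self.state = self.TRANSITIONS[key]
        return self.state


if __name__ == "__main__":
    wf = OrderWorkflow()
    for ev in ["assign", "approve", "order", "complete"]:
        print(ev, "->", wf.fire(ev))
```

A workflow engine adds persistence, event handling, and visual composition on top of such a state table, which is the productivity argument made in the keynote.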
{"Source-Url": "http://www.public.asu.edu/~ychen10/activities/jicsit11/ChenKeynote11.pdf", "len_cl100k_base": 4097, "olmocr-version": "0.1.53", "pdf-total-pages": 42, "total-fallback-pages": 0, "total-input-tokens": 56437, "total-output-tokens": 6381, "length": "2e12", "weborganizer": {"__label__adult": 0.0004684925079345703, "__label__art_design": 0.0005364418029785156, "__label__crime_law": 0.00033926963806152344, "__label__education_jobs": 0.00951385498046875, "__label__entertainment": 9.143352508544922e-05, "__label__fashion_beauty": 0.00022804737091064453, "__label__finance_business": 0.0005526542663574219, "__label__food_dining": 0.00037932395935058594, "__label__games": 0.0005011558532714844, "__label__hardware": 0.0012493133544921875, "__label__health": 0.0005655288696289062, "__label__history": 0.0003170967102050781, "__label__home_hobbies": 0.00016808509826660156, "__label__industrial": 0.0004503726959228515, "__label__literature": 0.00032138824462890625, "__label__politics": 0.0002105236053466797, "__label__religion": 0.0005435943603515625, "__label__science_tech": 0.0203704833984375, "__label__social_life": 0.0002472400665283203, "__label__software": 0.006500244140625, "__label__software_dev": 0.955078125, "__label__sports_fitness": 0.00032782554626464844, "__label__transportation": 0.000705718994140625, "__label__travel": 0.00025653839111328125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23316, 0.01609]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23316, 0.1033]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23316, 0.79174]], "google_gemma-3-12b-it_contains_pii": [[0, 191, false], [191, 534, null], [534, 1051, null], [1051, 1552, null], [1552, 2237, null], [2237, 2801, null], [2801, 3422, null], [3422, 3506, null], [3506, 3828, null], [3828, 3935, null], [3935, 4165, null], [4165, 4410, null], [4410, 4707, null], [4707, 4891, null], [4891, 5057, null], [5057, 5635, null], [5635, 6128, null], [6128, 6233, null], [6233, 6253, null], [6253, 6292, null], [6292, 6543, null], [6543, 6769, null], [6769, 7906, null], [7906, 8383, null], [8383, 8617, null], [8617, 8815, null], [8815, 9037, null], [9037, 9314, null], [9314, 9559, null], [9559, 9795, null], [9795, 10181, null], [10181, 10792, null], [10792, 11007, null], [11007, 11251, null], [11251, 11388, null], [11388, 11679, null], [11679, 11928, null], [11928, 12152, null], [12152, 14859, null], [14859, 20258, null], [20258, 22640, null], [22640, 23316, null]], "google_gemma-3-12b-it_is_public_document": [[0, 191, true], [191, 534, null], [534, 1051, null], [1051, 1552, null], [1552, 2237, null], [2237, 2801, null], [2801, 3422, null], [3422, 3506, null], [3506, 3828, null], [3828, 3935, null], [3935, 4165, null], [4165, 4410, null], [4410, 4707, null], [4707, 4891, null], [4891, 5057, null], [5057, 5635, null], [5635, 6128, null], [6128, 6233, null], [6233, 6253, null], [6253, 6292, null], [6292, 6543, null], [6543, 6769, null], [6769, 7906, null], [7906, 8383, null], [8383, 8617, null], [8617, 8815, null], [8815, 9037, null], [9037, 9314, null], [9314, 9559, null], [9559, 9795, null], [9795, 10181, null], [10181, 10792, null], [10792, 11007, null], [11007, 11251, null], [11251, 11388, null], [11388, 11679, null], [11679, 11928, null], [11928, 12152, null], [12152, 14859, null], [14859, 20258, null], [20258, 22640, null], [22640, 23316, null]], 
"google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 23316, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 23316, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23316, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23316, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 23316, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23316, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23316, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23316, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23316, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23316, null]], "pdf_page_numbers": [[0, 191, 1], [191, 534, 2], [534, 1051, 3], [1051, 1552, 4], [1552, 2237, 5], [2237, 2801, 6], [2801, 3422, 7], [3422, 3506, 8], [3506, 3828, 9], [3828, 3935, 10], [3935, 4165, 11], [4165, 4410, 12], [4410, 4707, 13], [4707, 4891, 14], [4891, 5057, 15], [5057, 5635, 16], [5635, 6128, 17], [6128, 6233, 18], [6233, 6253, 19], [6253, 6292, 20], [6292, 6543, 21], [6543, 6769, 22], [6769, 7906, 23], [7906, 8383, 24], [8383, 8617, 25], [8617, 8815, 26], [8815, 9037, 27], [9037, 9314, 28], [9314, 9559, 29], [9559, 9795, 30], [9795, 10181, 31], [10181, 10792, 32], [10792, 11007, 33], [11007, 11251, 34], [11251, 11388, 35], [11388, 11679, 36], [11679, 11928, 37], [11928, 12152, 38], [12152, 14859, 39], [14859, 20258, 40], [20258, 22640, 41], [22640, 23316, 42]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23316, 0.14286]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
c8e13e9ecef3ba613b2365a8e77fdeae9237f057
The workload of operational DBMSs is measured in tps, i.e., transactions per second - $\approx 10$ to $10^3$ tps for banking applications and flight reservations - Concurrency control provides concurrent access to data - It increases DBMS efficiency by - maximizing the number of transactions per second (throughput) - minimizing response time

Elementary I/O operations - Elementary operations are - Read of a single data object $x$: $r(x)$ - Write of a single data object $x$: $w(x)$ - They may require reading from disk or writing to disk an entire page

The scheduler - is a block of the concurrency control manager - is in charge of deciding if and when read/write requests can be satisfied - The absence of a scheduler may cause correctness problems, also called anomalies

Lost update - Transactions $T_1$ and $T_2$ both execute $r(x)$, $x = x + 1$, $w(x)$ starting from $x = 2$ - The correct value is $x = 4$ - The effect of transaction $T_2$ is lost because both transactions read the same initial value

Dirty read - Transaction $T_1$: $r_1(x)$, $x = x + 1$, $w_1(x)$ - Transaction $T_2$: $r_2(x)$, $x = x + 1$, $w_2(x)$, executed after $T_1$'s write - $T_2$ reads the value of $x$ in an intermediate state which never becomes stable (permanent), e.g., because $T_1$ subsequently rolls back

Inconsistent read - Transaction $T_1$ reads $x$ twice, while transaction $T_2$ updates $x$ between the two reads - $x$ has a different value each time it is read

Ghost update (a) - Transaction $T_1$ reads $x$, $y$, and $z$ and computes total $= x + y + z$, while transaction $T_2$ executes $y = y - 100$, $z = z + 100$ and commits - The correct value is total $= 400 + 200 + 400 = 1000$, but $T_1$ reads a mix of values taken before and after $T_2$'s update and therefore obtains a different total

Ghost update (b) - Transaction $T_1$ reads the salary of all employees in department $x$ and computes the AVG salary, while transaction $T_2$ inserts a new employee into the same department and commits - The insert operation produces the ghost update: the inserted data is not yet in the database when $T_1$ performs its first read

Politecnico di Torino **Database Management Systems** **Concurrency Control**

**Schedule** - The transaction is a sequence of read and write operations characterized by the same TID (Transaction Identifier): \[ r_t(x), r_t(y), w_t(x), w_t(y) \] - The schedule is a sequence of read/write operations presented by concurrent transactions: \[ r_1(x), r_1(y), w_1(x), w_2(y), w_2(z) \] - Operations in the schedule appear in the arrival order of requests.

**Scheduler** - Concurrency control accepts or rejects schedules to avoid anomalies. - The scheduler has to accept or reject operation execution without knowing the outcome (`abort`/`commit`) of the transactions.

**Commit projection** - Commit projection is a simplifying hypothesis: - The schedule only contains transactions performing `commit`. - The dirty read anomaly is not addressed. - This hypothesis will be removed later.

**Serial schedule** - In a serial schedule, the actions of each transaction appear in sequence, without interleaved actions belonging to different transactions. - Example: \[ r_0(x), r_0(y), w_0(x), r_2(y), w_1(x), w_1(y) \]

**Serializable schedule** - An arbitrary schedule \( S_i \) (commit projection) is correct when it yields the same result as an arbitrary serial schedule \( S_j \) of the same transactions. - \( S_i \) is serializable.
- \( S_i \) is equivalent to an arbitrary serial schedule of the same transactions.

Equivalence between schedules - Different equivalence classes between two schedules - View equivalence - Conflict equivalence - 2 phase locking - Timestamp equivalence - Each equivalence class - detects a set of acceptable schedules - is characterized by a different complexity in detecting equivalence

Definitions - reads-from - \( r_i(x) \) reads-from \( w_j(x) \) when - \( w_j(x) \) precedes \( r_i(x) \) and \( i \neq j \) - there is no other \( w_k(x) \) between them - final write - \( w_j(x) \) is a final write if it is the last write of \( x \) appearing in the schedule - Two schedules are view equivalent if they have - the same reads-from set - the same final write set

View equivalence - A schedule is view serializable if it is view equivalent to an arbitrary serial schedule of the same transactions - Example - \( S_1 \) is view serializable because it is view equivalent to \( S_2 \) - \( S_1 = w_0(x) \ r_2(x) \ r_1(x) \ w_2(x) \ w_2(z) \) - \( S_2 = w_0(x) \ r_1(x) \ r_2(x) \ w_2(x) \ w_2(z) \)

View serializable schedule - A schedule is view serializable if it is view equivalent to an arbitrary serial schedule of the same transactions - VSR: schedules which are view serializable - Example - \( S_1 \) is view serializable because it is view equivalent to \( S_2 \) - \( S_1 = w_0(x) \ r_2(x) \ w_2(x) \ w_1(x) \) - \( S_2 = w_0(x) \ r_2(x) \ w_2(x) \ w_2(z) \)

Lost update anomaly - Transaction \( T_1 \): bot, \( r_1(x) \), \( x = x + 1 \), \( w_1(x) \), commit - Transaction \( T_2 \): bot, \( r_2(x) \), \( x = x + 1 \), \( w_2(x) \), commit - Corresponding schedule: \( S = r_1(x) \ r_2(x) \ w_1(x) \ w_2(x) \)

Lost update anomaly - Is this schedule serializable? - There are only two possible serial schedules - \( S_1 = r_1(x) \ w_1(x) \ r_2(x) \ w_2(x) \) - \( S_2 = r_2(x) \ w_2(x) \ r_1(x) \ w_1(x) \) - \( S \) is not view equivalent to any serial schedule - not serializable - should be rejected

**Inconsistent read anomaly** - Transaction T₁: bot, r₁(x), r₁(x), commit - Transaction T₂: bot, r₂(x), x = x + 1, w₂(x), commit

**Ghost Update (a)** - Transaction T₁: bot, r₁(x), r₁(y), r₁(z), total = x + y + z, commit - Transaction T₂: bot, r₂(y), y = y - 100, r₂(z), z = z + 100, w₂(y), w₂(z), commit

**Checking view serializability** - Detecting view equivalence between two given schedules has linear complexity. - Detecting view equivalence to an arbitrary serial schedule is an NP-complete problem. - Not feasible in real systems. - Less accurate but faster techniques should be considered.

**Conflict equivalence** - Conflicting actions: - Action Aᵢ is in conflict with action Aⱼ (i ≠ j) if both actions operate on the same object and at least one of them is a write. - Read-Write conflicts (RW or WR). - Write-Write conflicts (WW). - Two schedules are conflict equivalent if: - They have the same conflict set. - Each conflict pair is in the same order in both schedules.

Conflict serializable schedule - A schedule is **conflict serializable** if it is conflict equivalent to an arbitrary serial schedule of the same transactions. - **CSR**: schedules which are conflict serializable. - Example \[ S = w_0(x) \ r_1(x) \ w_0(z) \ r_1(z) \ r_2(x) \ r_3(z) \ w_3(z) \ w_1(x) \] \[ S_S = w_0(x) \ w_0(z) \ r_2(x) \ r_1(x) \ r_1(z) \ w_1(x) \ r_3(z) \ w_3(z) \] Schedule \( S \) is conflict serializable because it is conflict equivalent to the serial schedule \( S_S \).
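Returning to the view-equivalence definitions above, the following is a small, illustrative Python sketch (not part of the original slides) that computes the reads-from set and the final-write set of a schedule, so that two schedules over the same transactions and operations can be compared for view equivalence. The triple-based encoding of schedules is an assumption made purely for the example.

```python
# Illustrative helpers for the reads-from / final-write definitions above.
# A schedule is a list of (transaction_id, op, obj) triples with op in {"r", "w"}.

def reads_from(schedule):
    """Set of (reader, obj, writer): the reader's r(obj) reads the value
    written by the writer's most recent preceding w(obj)."""
    rf = set()
    last_writer = {}  # obj -> transaction that wrote it last, so far
    for t, op, obj in schedule:
        if op == "r" and obj in last_writer and last_writer[obj] != t:
            rf.add((t, obj, last_writer[obj]))
        elif op == "w":
            last_writer[obj] = t
    return rf

def final_writes(schedule):
    """Set of (obj, writer): the writer performs the last w(obj) in the schedule."""
    fw = {}
    for t, op, obj in schedule:
        if op == "w":
            fw[obj] = t
    return {(obj, t) for obj, t in fw.items()}

def view_equivalent(s1, s2):
    # Assumes s1 and s2 contain the same transactions and operations.
    return reads_from(s1) == reads_from(s2) and final_writes(s1) == final_writes(s2)

# First example above: S1 = w0(x) r2(x) r1(x) w2(x) w2(z), S2 = w0(x) r1(x) r2(x) w2(x) w2(z)
S1 = [(0, "w", "x"), (2, "r", "x"), (1, "r", "x"), (2, "w", "x"), (2, "w", "z")]
S2 = [(0, "w", "x"), (1, "r", "x"), (2, "r", "x"), (2, "w", "x"), (2, "w", "z")]
print(view_equivalent(S1, S2))  # True: same reads-from and final-write sets
```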
Detecting conflict serializability - To detect conflict serializability, it is possible to exploit the **conflict graph**. - **Conflict graph**: - A node for each transaction. - An edge \( T_i \rightarrow T_j \) if: - There exists at least one conflict between an action \( A_i \) in \( T_i \) and an action \( A_j \) in \( T_j \). - \( A_i \) precedes \( A_j \). - If the conflict graph is acyclic, the schedule is CSR. - Checking graph cyclicity is linear in the size of the graph.

Real system settings - 100 tps (transactions per second) - Each transaction accesses \( \approx 10 \) pages. - Each transaction lasts \( \approx 5 \) s. - The conflict graph is characterized by 500 nodes (100 tps \(\times\) 5 seconds). - Accesses to be checked for conflicts: 500 nodes \(\times\) 10 pages accessed \( \approx 5000 \) accesses. - At each access: - The graph should be updated. - Cycle absence should be checked.

VSR versus CSR - CSR schedules are a subset of VSR schedules. - There exist schedules that are VSR but not CSR.

2 Phase Locking - The read lock is shared among different transactions. - The write lock is exclusive: it is not compatible with any other lock (R/W) on the same data. - Lock escalation: a request of an R-Lock followed by a W-Lock on the same data.

Lock manager - The scheduler becomes a lock manager. - It receives transaction requests and grants locks based on the locks already granted to other transactions. - When the lock request is granted: - The corresponding resource is acquired by the requesting transaction. - When the transaction performs unlock, the resource becomes available again. - When the lock is not granted: - The requesting transaction is put in a waiting state. - The wait terminates when the resource is unlocked and becomes available. - The lock manager exploits: - The information in the lock table to decide if a given lock can be granted to a transaction. - The conflict table to manage lock conflicts.

Conflict table (requested operation versus current resource state):
<table> <thead> <tr> <th>Request</th> <th>Free</th> <th>R-Locked</th> <th>W-Locked</th> </tr> </thead> <tbody> <tr> <td>R-Lock</td> <td>OK (resource becomes R-Locked)</td> <td>OK (resource stays R-Locked)</td> <td>No</td> </tr> <tr> <td>W-Lock</td> <td>OK (resource becomes W-Locked)</td> <td>No</td> <td>No</td> </tr> <tr> <td>Unlock</td> <td>Error</td> <td>OK (R-Locked or Free, depending on the counter)</td> <td>OK (resource becomes Free)</td> </tr> </tbody> </table>

Lock manager - The lock manager exploits the information in the lock table, stored in main memory for each data object, to decide if a given lock can be granted to a transaction: - 2 bits to represent the 3 possible object states (free, r_locked, w_locked). - A counter to count the number of waiting transactions.

Read locks - Read locks are shared: other transactions may lock the same resource. - A counter is used to count the number of transactions currently holding the R-Lock. - The resource is free when count = 0.

2 Phase Locking - Exploited by most commercial DBMSs. - It is characterized by two phases: - Growing phase: needed locks are acquired. - Shrinking phase: all locks are released. - 2 Phase Locking guarantees serializability: a transaction cannot acquire a new lock after having released any lock. - The example below (after the following sketch) shows a schedule that is not accepted by 2PL even though it is serializable.
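Before turning to that example, the conflict-graph test described above can be sketched in a few lines of illustrative Python (not part of the original slides): build an edge \( T_i \rightarrow T_j \) for every conflict in which an action of \( T_i \) precedes a conflicting action of \( T_j \), then check the graph for cycles. The schedule encoding is the same assumed triple format used earlier.

```python
# Illustrative conflict-graph test for conflict serializability (CSR).
# A schedule is a list of (transaction_id, op, obj) with op in {"r", "w"}.

from collections import defaultdict

def conflict_graph(schedule):
    """Edge Ti -> Tj if an action of Ti conflicts with a later action of Tj."""
    edges = defaultdict(set)
    for i, (ti, op_i, obj_i) in enumerate(schedule):
        for tj, op_j, obj_j in schedule[i + 1:]:
            if ti != tj and obj_i == obj_j and (op_i == "w" or op_j == "w"):
                edges[ti].add(tj)
    return edges

def has_cycle(edges):
    """Depth-first search for a cycle in the directed graph."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = defaultdict(int)

    def visit(node):
        colour[node] = GREY
        for succ in edges[node]:
            if colour[succ] == GREY:
                return True
            if colour[succ] == WHITE and visit(succ):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and visit(n) for n in list(edges))

def is_csr(schedule):
    return not has_cycle(conflict_graph(schedule))

# Lost-update schedule from the slides: S = r1(x) r2(x) w1(x) w2(x)
S = [(1, "r", "x"), (2, "r", "x"), (1, "w", "x"), (2, "w", "x")]
print(is_csr(S))  # False: the conflict graph contains the cycle T1 <-> T2
```

As the slides note, a real lock-based scheduler avoids maintaining this graph at run time; the sketch is only meant to make the definition concrete.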
Example \[ S = r_1(x) \ w_1(x) \ r_2(x) \ w_2(x) \ r_1(y) \ w_1(y) \] - \( T_1 \) releases the lock on \( x \) (so that \( T_2 \) can access it) - \( T_1 \) should then acquire a new lock on \( y \) - The schedule is CSR but not 2PL

Ghost update (a) with 2 Phase Locking - Lock-request trace of \( T_1 \) and \( T_2 \) on the resources \( x \), \( y \), and \( z \) (lock-state timeline not reproduced): the transaction that acquires its locks first forces the other to wait until those locks are released, so the ghost update anomaly cannot occur.

Strict 2 Phase Locking - **Strict** 2 Phase Locking allows dropping the commit projection hypothesis. - A transaction's locks may be released only at the end of the transaction - after COMMIT/ROLLBACK. - After the end of the transaction, data is stable. - It avoids the dirty read anomaly.

Lock Manager service interface - **Primitives** - R-Lock \((T, x, ErrorCode, TimeOut)\) - W-Lock \((T, x, ErrorCode, TimeOut)\) - UnLock \((T, x)\) - **Parameters** - \( T \): Transaction ID of the requesting transaction - \( x \): requested resource - \( ErrorCode \): return parameter - Ok - Not Ok (request not satisfied) - \( TimeOut \): maximum time for which the transaction is willing to wait

Techniques to manage locking - A transaction requests a resource \( x \). - If the request can be satisfied: - The lock manager modifies the state of resource \( x \) in its internal tables. - It returns control to the requesting transaction. - The processing delay is very small.
Locking primitives - **Shared Lock (SL)** - **Exclusive Lock (XL)** - **Intention of Shared Lock (ISL)** - It shows the intention of shared locking on an object which is in a lower node in the hierarchy - i.e., a descendant of the current node - **Intention of Exclusive Lock (IXL)** - Analogous to ISL, but for exclusive lock Request protocol 1. Locks are always requested starting from the tree root and going down the tree 2. Locks are released starting from the blocked node of smaller granularity and going up the tree 3. To request a SL or an ISL on a given node, a transaction must own an ISL (or IXL) on its parent node in the tree 4. To request an XL, IXL or SIXL on a given node, a transaction must own an IXL or SIXL on its parent node in the tree Compatibility matrix <table> <thead> <tr> <th>Request</th> <th>ISL</th> <th>IXL</th> <th>SL</th> <th>SIXL</th> <th>XL</th> </tr> </thead> <tbody> <tr> <td>ISL</td> <td>Ok</td> <td>Ok</td> <td>Ok</td> <td>Ok</td> <td>No</td> </tr> <tr> <td>IXL</td> <td>Ok</td> <td>Ok</td> <td>No</td> <td>No</td> <td>No</td> </tr> <tr> <td>SL</td> <td>Ok</td> <td>No</td> <td>Ok</td> <td>No</td> <td>No</td> </tr> <tr> <td>SIXL</td> <td>Ok</td> <td>No</td> <td>No</td> <td>No</td> <td>No</td> </tr> <tr> <td>XL</td> <td>No</td> <td>No</td> <td>No</td> <td>No</td> <td>No</td> </tr> </tbody> </table> Precedence graph for locks - XL → SIXL - SL → SIXL - ISL Selection of lock granularity - It depends on the application type - if it performs localized reads or updates of few objects - low levels in the hierarchy (detailed granularity) - if it performs massive reads or updates - high levels in the hierarchy (rough granularity) - Effect of lock granularity - if it is too coarse, it reduces concurrency - high likeliness of conflicts - if it is too fine, it forces a significant overhead on the lock manager Predicate locking - It addresses the ghost update of type b (insert) anomaly - for 2PL a read operation is not in conflict with the insert of a new tuple - the new tuple can't be locked in advance - **Predicate locking** allows locking all data satisfying a given predicate - implemented in real systems by locking indices Locking in SQL2 standard - Transaction types - read-write (default case) - read only - no data or schema modifications are allowed - shared locks are enough - The isolation level of a transaction specifies how it interacts with the other executing transactions - it may be set by means of SQL statements Isolation levels - **SERIALIZABLE** - the highest isolation level - it includes predicate locking - REPEATABLE READ - strict 2PL without predicate locking - reads of existing objects can be correctly repeated - no protection against ghost update (b) anomaly - the computation of aggregate functions cannot be repeated - **READ COMMITTED** - not 2PL - the read lock is released as soon as the object is read - reading intermediate states of a transaction is avoided - dirty reads are avoided - **READ UNCOMMITTED** - not 2PL - data is read without acquiring the lock - dirty reads are allowed - only allowed for read only transactions The isolation level of a transaction may be set by means of the statement ``` SET TRANSACTION [ISOLATION LEVEL <IsolationLevel>] [READ ONLY] [READ WRITE] ``` Write operations are always executed under strict 2PL with exclusive lock Deadlock Typical situation for concurrent systems managed by means of locking waiting conditions. 
Solving deadlocks - **Timeout** - the transaction waits for a given time - after the expiration of the timeout - it receives a negative answer and it performs rollback - Typically adopted in commercial DBMS - Length of the timeout interval - long: long waiting before solving the deadlock - short: overkill, which overloads the system Deadlock prevention - **Pessimistic 2PL** - All needed locks are acquired before the transaction starts - not always feasible - **Timestamp** - only "younger" (or older) transactions are allowed to wait - it may cause overkill Deadlock detection - Based on the *wait graph* - nodes are transactions - an edge represents a waiting state between two transactions - A cycle in the graph represents a deadlock - Expensive to build and maintain - used in distributed DBMS Elena Baralis, Silvia Chiusano Politecnico di Torino
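Deadlock detection based on the wait graph described above can also be made concrete with a short, illustrative Python sketch (not part of the original slides): nodes are transactions, and an edge Ti -> Tj means that Ti is waiting for a lock held by Tj; a cycle in this graph is a deadlock. The dictionary-based graph encoding is an assumption made for the example.

```python
# Illustrative wait-for-graph deadlock check. An edge Ti -> Tj means
# transaction Ti is waiting for a resource currently locked by Tj.

def find_deadlock(wait_for):
    """Return a list of transactions forming a cycle, or None if there is none."""
    visited, on_path = set(), []

    def dfs(node):
        if node in on_path:
            return on_path[on_path.index(node):]  # cycle found
        if node in visited:
            return None
        visited.add(node)
        on_path.append(node)
        for nxt in wait_for.get(node, ()):
            cycle = dfs(nxt)
            if cycle:
                return cycle
        on_path.pop()
        return None

    for start in wait_for:
        cycle = dfs(start)
        if cycle:
            return cycle
    return None

# T1 waits for T2, T2 waits for T3, T3 waits for T1: a deadlock.
print(find_deadlock({"T1": ["T2"], "T2": ["T3"], "T3": ["T1"]}))  # ['T1', 'T2', 'T3']
# T1 waits for T2 only: no deadlock.
print(find_deadlock({"T1": ["T2"]}))  # None
```

In a centralized DBMS the timeout approach mentioned above is usually preferred; explicit wait-graph maintenance, as sketched here, is more common in distributed settings.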
{"Source-Url": "http://dbdmg.polito.it/twiki/pub/Public/SistemiDiGestioneDiBasiDati/6-ConcurrencyControl-x6.pdf", "len_cl100k_base": 5216, "olmocr-version": "0.1.53", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 34261, "total-output-tokens": 5889, "length": "2e12", "weborganizer": {"__label__adult": 0.0003285408020019531, "__label__art_design": 0.0002065896987915039, "__label__crime_law": 0.0004763603210449219, "__label__education_jobs": 0.0007805824279785156, "__label__entertainment": 5.739927291870117e-05, "__label__fashion_beauty": 0.0001436471939086914, "__label__finance_business": 0.000537872314453125, "__label__food_dining": 0.00028777122497558594, "__label__games": 0.0007486343383789062, "__label__hardware": 0.00240325927734375, "__label__health": 0.0006060600280761719, "__label__history": 0.00021564960479736328, "__label__home_hobbies": 0.00011968612670898438, "__label__industrial": 0.000652313232421875, "__label__literature": 0.00015354156494140625, "__label__politics": 0.00019109249114990232, "__label__religion": 0.0003943443298339844, "__label__science_tech": 0.07562255859375, "__label__social_life": 6.473064422607422e-05, "__label__software": 0.0259246826171875, "__label__software_dev": 0.88916015625, "__label__sports_fitness": 0.00030422210693359375, "__label__transportation": 0.0004901885986328125, "__label__travel": 0.00017189979553222656}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 17730, 0.02704]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 17730, 0.46103]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 17730, 0.80632]], "google_gemma-3-12b-it_contains_pii": [[0, 938, false], [938, 2215, null], [2215, 3621, null], [3621, 5657, null], [5657, 6616, null], [6616, 8015, null], [8015, 9059, null], [9059, 10261, null], [10261, 12185, null], [12185, 13573, null], [13573, 14710, null], [14710, 16742, null], [16742, 17730, null]], "google_gemma-3-12b-it_is_public_document": [[0, 938, true], [938, 2215, null], [2215, 3621, null], [3621, 5657, null], [5657, 6616, null], [6616, 8015, null], [8015, 9059, null], [9059, 10261, null], [10261, 12185, null], [12185, 13573, null], [13573, 14710, null], [14710, 16742, null], [16742, 17730, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 17730, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, true], [5000, 17730, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 17730, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 17730, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 17730, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 17730, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 17730, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 17730, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 17730, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, true], [5000, 17730, null]], "pdf_page_numbers": [[0, 938, 1], [938, 2215, 2], [2215, 3621, 3], [3621, 5657, 4], [5657, 6616, 5], [6616, 8015, 6], [8015, 9059, 7], [9059, 10261, 8], [10261, 12185, 9], [12185, 13573, 10], [13573, 14710, 11], [14710, 16742, 12], [16742, 17730, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 17730, 0.02778]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
84e4024ba1284286e31e71376aa270521e5d2888
UI/UX Design Of Mobile-Based Pharmacy Application Using Design Thinking Method Suryani1)*, Nurdiansah3), Faizal3), Nirwana6), Andrew Ridow Johanis5), Marsa6), Arkjun Yudistira Pratama7) 1)2)3)4)5)6)7) Universitas Dipa Makassar, Indonesia 1)*suryani187@undipa.ac.id, 2)nurdiansah@undipa.ac.id, 3)F41241@undipa.ac.id, 4)nirwana@undipa.ac.id, 5)andrew@undipa.ac.id, 6)marshaarie@undipa.ac.id, 7)nebo.arkjunior.yudistira@gmail.com ABSTRACT Pharmacy is a public facility that supplies, distributes and serves medicine needs. Based on observations, Rania Farma Pharmacy is one of the pharmacies that still uses conventional methods of medicine management. The Pharmacist Assistant has to write down medicine stocks and transactions in a book, calculate sales with the help of a calculator and monthly reporting by inputting daily sales in the Microsoft Office Excel application which takes quite a long time. This process is very prone to human error, such as calculation and recording errors, which can harm pharmacies and consumers. This research aims to design a mobile-based pharmacy application based on the User Interface (UI) and User Experience (UX) using the Design Thinking method. Then perform prototype analysis using the System Usability Scale (SUS). Design Thinking includes software development methods that focus on finding solutions to human-centered problems. Pharmacy information needs are collected through observation, interviews and literature study. In designing this information system, it is hoped that it can help medicine management at the Rania Farma Pharmacy. Keywords: UI, UX, Design Thinking, System Usability Scale, Pharmacy. 1. INTRODUCTION According to the Regulation of the Minister of Health of the Republic of Indonesia No.9 of 2017, pharmacies aim to serve the general public's health and provide quality pharmaceutical services. Thus, the drug distribution and transaction process vary greatly every day (MENTERI KESEHATAN REPUBLIK INDONESIA, 2017). As is the case at the Rania Farma Pharmacy located in Makassar, which plays a role in procuring, receiving, storing, recording and reporting drug preparations. These activities are carried out manually and have not utilized technology such as website systems or mobile applications. The Pharmacist Assistant has to write down drug stocks and transactions in a book, calculate sales with the help of a calculator and do monthly reporting by inputting daily sales in Microsoft Excel, which takes a long time. This manual process is prone to human error, such as calculation and recording errors, which can be detrimental to the pharmacy and the consumer. Based on that problem, innovation is needed in managing pharmaceutical activities at the Rania Farma Pharmacy, including creating a mobile-based drug management system. Therefore, the researchers created "UI/UX Design for Mobile-Based Pharmacy Applications using the Design Thinking Method" to facilitate work, streamline time and avoid human error within the scope of Rania Farma Pharmacy. User interface design is one of the factors that determine the number of visitors and users of a system or application. The User Interface requires a Usability Test to check how efficient and effective the User Interface System or application is (Auliaddina et al., 2021). The design emphasizes logical solutions regarding how the system meets requirements (Tangkowit et al., 2021). 
So, in general, design means designing the appearance of a system or application so that it meets user needs and helps developers get an overview of the system or application to be built. An application is software installed on a computer that contains various commands for carrying out work according to the instructions given by the user (Hasriani et al., 2023). Applications can be interpreted as ready-to-use programs that are designed to carry out a function for other users or applications and can be used by the intended target (Ramadhan et al., 2021). Mobile applications are the most widely used technology among the three platforms: desktop, web and mobile (Yusril et al., 2021). The Design Thinking method is an innovation-based software product design method based on finding solutions to certain problems. Problems that have not been clearly defined are addressed by understanding the needs of the users who will be involved in using the application, gathering many ideas in brainstorming sessions, and taking a direct approach through prototyping and direct testing (Susanti et al., 2019). Design Thinking is a human-centred approach, suitable as a frame of mind for solving problems humans face while creating products or services as a solution (Herawan, 2019). This method has five stages which, as a whole, concentrate on finding problems from a human-centred perspective. After the problem is found, the focus shifts to needs and to finding solutions to the problems encountered. The last step is testing the design with users to get an overview of possible deficiencies or errors. In designing the appearance of the application, many software tools can be used. In this study, the tool used is Figma, which provides prototyping features for the testing process. Figma is popular design software used to design the appearance of mobile applications, desktop applications or websites. Figma's strength lies in its ability to let several designers work on the same designs collaboratively (Muhyidin et al., 2020). Therefore, Figma is widely used to build application views quickly and effectively. There are several previous studies related to this research. In the study by Rohili & Budi (2022), a web-based drug sales information system was built to support operational activities at the Khodijah Pharmacy; the method used was prototyping with the CodeIgniter framework and Bootstrap. In Krismonika et al. (2021), a drug inventory system was created so that drug inventory management is more accurate and orderly and drug stock data is easier to search; the research method was the waterfall method, while system testing used black-box testing. Related research developed a Hospital Management Information System (SIMRS) in a hospital (Paramadani et al., 2020), with UI and UX development so that the SIMRS conforms to business processes, is easier for users to use, and has a more attractive appearance. UI development in that study used the User Centered Design (UCD) method, and the UI design results were tested with SUS, obtaining a score of 78, which can be categorized as good.
As for testing the user experience, the results were good and above average. Research on the development of user interfaces and user experience using the design thinking method was carried out by Herfandi et al. (2022). The design thinking method proved able to provide solutions for designing user interfaces based on a user experience approach. That study applied the design thinking method to analyze and develop the user interface and user experience of the BPR Sumbawa website. The results of the UI/UX website development take the form of user empathy maps, user personas, user interface designs based on the defined stages, wireframes and responsive prototypes made using Figma. Another related study that develops interfaces and user experience is Yohanes et al. (2018), which used the Goal-Directed Design method to determine the needs and goals of using the application. The initial evaluation of the website used the WEBUSE (Website Usability Evaluation) questionnaire. Development was carried out through the Research, Modeling, Requirements, Framework, and Improvement phases. The research results are design recommendations with increased value criteria: Content, Organization and Readability 0.19, Navigation and Links 0.14, User Interface Design 0.18, Performance and Effectiveness 0.09, and an average across all indicators of 0.75, which falls into the good rating scale. Another related study was carried out by Albert et al. (2021), which redesigned the UI/UX of the website of PT Interbat, one of Indonesia's largest pharmaceutical companies, with a better information structure. The website was deemed unattractive from a visual standpoint, so the company website was developed or redesigned to distribute information about the company widely and make it an attractive internet-based promotional medium. Based on these previous studies, the present research applies aspects of the user interface and user experience to application design using the Design Thinking method and evaluates the prototypes with SUS analysis. Previous research analyzed the UI/UX of a pharmacy website design using the HCD and SUS methods, obtaining an average score of 77.6, which is included in the Good category. This study is expected to provide solutions by developing based on user needs and creating displays that prioritize or focus on users. 2. METHOD The research method used in this study can be seen in the flow chart in Figure 1: ![Flow Chart](image-url) As shown in Figure 1 above, the research began with problem identification, followed by data collection. The data collection methods used in this study are field research and literature research. Field research includes observation activities, namely direct observation and recording related to the procurement, reception, distribution and reporting of pharmaceuticals at Rania Farma Pharmacy. In addition, interviews were conducted with the Pharmacist Assistant regarding drug supplies, suppliers and the constraints faced at the Rania Farma Pharmacy, so that valid data was obtained. Literature research involves collecting data and information from electronic books and journals as references to support the research process. The Design Thinking method is a human-centred (human-centric) design approach to solving problems and presenting new innovations.
The use of the Design Thinking method is expected to be able to meet user needs and to solve user problems when using the application (Shirvanadi & Idris, 2021). The stages of the method are as follows: 1. Empathize. This stage is the main reference in human-centred design and tries to understand the user in the context of the design (Candra Wardana & Gusti Lanang Putra Eka Prismana, 2022). In this stage, the authors conduct direct interviews and observe the needs at the Rania Farma Pharmacy. 2. Define is the phase of finding the point of view on the core of the problem (Febriansari et al., 2022). In this stage, the authors define and summarize the problem and list the needs at Rania Farma Pharmacy. 3. Ideate focuses on finding ideas/solutions to the conclusions drawn in the previous stage (Wijaya et al., 2022). In this stage, the authors focus on finding ideas/solutions from the prior stage to serve as the basis for prototyping. 4. The prototype can be interpreted as an initial product used to test existing design ideas and as an example for the final product that will be released later (Azmi et al., 2019). In this stage, the authors design the pharmacy application based on the user interface/user experience. 5. The test is a trial technique for evaluating prototypes to obtain user input regarding the usability of the design and to fix problems that arise (Herfandi et al., 2022). In this stage, the authors collect feedback from Rania Farma Pharmacy to improve the design that has been made. The analytical method used to assist in evaluating the application testing is the SUS method. SUS is an analytical method that helps evaluate the usability of a user-oriented system/prototype. This method has 10 questions with five answer scales. There are rules for calculating the final score obtained (Damayanti et al., 2022): 1. For odd-numbered questions, the score contribution is the user's answer minus one. 2. For even-numbered questions, the score contribution is five minus the user's answer. 3. The contributions of all ten questions are summed and the sum is multiplied by 2.5. The interpretation of SUS values can be seen in table 1 below: <table> <thead> <tr> <th>SUS Score</th> <th>Grade</th> <th>Adjective Rating</th> </tr> </thead> <tbody> <tr> <td>&gt;80.3</td> <td>A</td> <td>Excellent</td> </tr> <tr> <td>68.1 – 80.3</td> <td>B</td> <td>Good</td> </tr> <tr> <td>68</td> <td>C</td> <td>Okay</td> </tr> <tr> <td>51 – 67.9</td> <td>D</td> <td>Poor</td> </tr> <tr> <td>≤51</td> <td>E</td> <td>Awful</td> </tr> </tbody> </table> As seen in table 1 above, SUS scores can be interpreted using a Grades and Adjective Rating approach. For grades, raw SUS scores can be grouped from A to E, where A means excellent and E means awful. Raw SUS scores can also be compared with one of the existing adjective ratings. SUS scores above 80.3 are considered Excellent, values from 68.1 to 80.3 fall into the Good category, 68 is Okay, values from 51 to 67.9 are Poor, and scores below 51 are Awful. 3. RESULT In this section, the researchers explain the results of the research obtained. Researchers can also use images, tables, and curves to explain the results of the study. These results should present the raw data or the results after applying the techniques outlined in the methods section. The results are simply results; they do not draw conclusions.
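To make the SUS calculation described in the Method section concrete, the following is a small, illustrative Python sketch (not part of the paper) that converts the ten raw answers of one respondent into a SUS score and maps it to the adjective ratings of Table 1; the example answers are hypothetical.

```python
# Illustrative SUS computation following the scoring rules above:
# odd items contribute (answer - 1), even items contribute (5 - answer),
# and the summed contributions are multiplied by 2.5.

def sus_score(answers):
    """answers: list of 10 responses on a 1-5 scale, in question order."""
    if len(answers) != 10:
        raise ValueError("SUS requires exactly 10 answers")
    contributions = [
        (a - 1) if (i % 2 == 0) else (5 - a)  # i = 0 is question 1 (odd)
        for i, a in enumerate(answers)
    ]
    return sum(contributions) * 2.5

def adjective(score):
    """Map a SUS score to the adjective rating of Table 1."""
    if score > 80.3:
        return "Excellent"
    if score >= 68.1:
        return "Good"
    if score >= 68:
        return "Okay"
    if score >= 51:
        return "Poor"
    return "Awful"

example = [4, 2, 5, 1, 4, 2, 5, 2, 4, 1]  # hypothetical respondent
s = sus_score(example)
print(s, adjective(s))  # 85.0 Excellent
```

When several respondents are involved, as in this study, the individual SUS scores computed this way are averaged before being interpreted against Table 1.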
The UI/UX design process begins with implementing each stage in the design thinking approach following established procedures (Haryuda Putra et al., 2021). This method enables problem analysis and idea discovery to reach user-oriented solutions. The results of this study take the form of a pharmacy application design prototype that has gone through development, evaluation and improvement, which supports the appearance and usability of the application. 1. **Empathize**. This stage is the first step in the design thinking approach, used to find out the problems and needs of users through observation and interviews with three respondents (Owner, Pharmacist and Assistant Pharmacist). The observations found that the system running at the Rania Farma pharmacy was still conventional: all drug management activities relied on manual recording, which is prone to human error. The drug management activities in question are selling and purchasing drugs, recording drug stocks, ordering and invoicing, and monthly reporting. Meanwhile, in the interviews, respondents stated that the manual system was inefficient, draining a lot of energy and requiring extra precision. The following is a list of problems in conventional drug management at the Rania Farma pharmacy:

<table> <thead> <tr> <th>No.</th> <th>Problems</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Inconsistencies in daily sales results recorded in the manual book.</td> </tr> <tr> <td>2</td> <td>It is difficult to update drug stocks every day.</td> </tr> <tr> <td>3</td> <td>The recording of drug expiration dates is still manual, so errors can occur.</td> </tr> <tr> <td>4</td> <td>Respondents found it difficult to process invoices, as manual price calculations take a lot of time.</td> </tr> <tr> <td>5</td> <td>Respondents sometimes have difficulty locating drugs.</td> </tr> <tr> <td>6</td> <td>Supplier bills do not match the due date.</td> </tr> <tr> <td>7</td> <td>Monthly sales and purchase reports are based on manual records.</td> </tr> </tbody> </table>

2. **Define**. This stage compiles and unifies information to analyze user needs. Because the current system is conventional, the pharmacy requires a mobile-based automated system that makes work easy for users. The requirements obtained in the define stage are as follows:

<table> <thead> <tr> <th>No.</th> <th>Needs</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Cashier system that manages transactions, stock and location of drugs.</td> </tr> <tr> <td>2</td> <td>A system that allows marking the expiration date</td> </tr> </tbody> </table>

3. **Ideate**. This stage aims to develop ideas to overcome the problems and meet user needs. The main problem is vulnerability to human error, so the Rania Farma pharmacy needs a mobile application designed on the basis of real-time UI/UX.
The ideate results are as follows:

<table> <thead> <tr> <th>No.</th> <th>Solutions</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Application with a cashier feature that manages transactions.</td> </tr> <tr> <td>2</td> <td>Application with drug stock management features.</td> </tr> <tr> <td>3</td> <td>Application that allows marking drug expiration dates.</td> </tr> <tr> <td>4</td> <td>Application with an electronic invoicing feature that is able to calculate drug prices automatically.</td> </tr> <tr> <td>5</td> <td>Application that contains notes regarding drug data and drug location.</td> </tr> <tr> <td>6</td> <td>Application with a reminder feature for supplier payment due dates.</td> </tr> <tr> <td>7</td> <td>Application with reports on sales, purchases and the pharmacy's monthly profits.</td> </tr> </tbody> </table>

A use-case diagram of the system can be seen in Figure 2: ![Use-case Diagram System](image-url) Based on the use-case diagram in Figure 2 above, the system is designed with two access levels, Admin and User. The Admin is the pharmacy owner, who has full access rights to the system, including adding or deleting users if there are changes in pharmacy staff over time. The pharmacy owner must first register by inputting personal data, including a username and password. They also need to input pharmacy data, including the pharmacy's name and address, and add pharmacist data, along with the assistant pharmacists to whom the pharmacy owner gives access rights to the pharmacy system or application. After registering, actors can log in and enter the system. The available menus that actors can access include a drug stock menu (actors can add and search for drugs), a transaction menu (actors can add, delete, and view transaction details), a supplier menu (actors can add and view invoice details), and a report menu (actors can see detailed sales reports and monthly drug purchases at the pharmacy). For the User level, personal data including username and password are inputted by the Admin, after which the user can log in. Users can check drug stock availability by searching, in the drug stock menu, for the drugs customers are interested in. In addition, Users can access the transaction menu, adding transactions according to the drug transactions at the pharmacy. 4. **The Prototype**. This stage turns the ideas into a simulated display that behaves as the application would when used. At this stage, the researchers use Figma digital prototyping as the reference for the application simulation design process. The UI/UX design of the pharmacy application, built with the Figma software and based on user needs and problem-solving at Rania Farma Pharmacy, can be seen in Figures 3 to 8 below: ![Figure 3. Login and Register](image) As seen in Figure 3 above, the Figma prototype has a Login form. To be able to access and use the other design features, users must first register or log in (if they already have an account). The user can then enter the name, address and other employees of the pharmacy. Based on the earlier interviews, the pharmacy consists of an Owner, a Pharmacist and an Assistant Pharmacist, so one pharmacy can have more than one user. As seen in Figure 4 above, the design of the home screen shows the username and pharmacy at the top, with notification and settings icons located on the right for easy access. Other views also give users quick access to important features and sales history. The researchers also added access to reminders of upcoming payments and the last processed transactions.
Based on the interviews, this display has met the needs of users at Rania Farma Pharmacy. As seen in Figure 5 above, the Stock display presents all the drugs that have been inputted, arranged alphabetically. The researchers added a filter to make it easier for users to find drugs with certain criteria, based on the problems faced by respondents, such as the lack of time efficiency in checking out-of-stock drugs and the absence of an estimate of the drugs most sought after by pharmacy visitors. Thus, the feature is relevant for solving user-oriented problems. As seen in Figure 6 above, the transaction display is arranged by date and contains details of all sales on that date. The researchers added a transaction search feature based on date, making it easier for users to find the transaction they want. As seen in Figure 7 above, the supplier display contains invoices from the related PBFs (pharmaceutical wholesalers), arranged by invoice entry date. For these invoices, the researchers provide an automatic drug price calculation feature that aims to eliminate the human error that sometimes results in drug price errors or expiry date discrepancies. As seen in Figure 8 above, the report display contains a list of the sales and purchases made within a month, and profits are calculated automatically. The researchers also added details for sales and purchases that allow pharmacy owners to access monthly reports more accurately and time-efficiently. 5. The Test. At this stage, the prototype was tested with the users, and the response to the prototype testing was collected through a questionnaire to get an overview of the user experience. The questionnaire was given online and distributed via a link containing 10 questions according to the provisions of the SUS analysis. In the case study at the Rania Farma pharmacy, the researchers took a total sample of 6 pharmacists who filled out the survey on a scale of 1-5. The testing of the display was carried out with a questionnaire following the SUS analysis, involving 6 respondents, and the respondents' answers were transformed into SUS scores. Based on Table 6, the average SUS score is 79.5, which is included in the 'Good' category; the SUS analysis thus indicates that the system testing can provide an overview of the need to use the system correctly and systematically.

4. CONCLUSION

After carrying out a series of studies, including literature and field studies at Rania Farma Pharmacy, from the analysis process to the results, we can conclude that the UI/UX design of a mobile-based pharmacy application using the design thinking method was successful. The design process includes the 5 stages of design thinking, making it easier for researchers to explore problems and needs and to find user-oriented solutions. In testing the prototype, the analysis used was the System Usability Scale (SUS), and the resulting SUS value was 79.5, which is included in the Good category. Therefore the design of the pharmacy application can be used and developed to become a better and more useful system for pharmacies.
5. REFERENCES
{"Source-Url": "https://jurnal.itscience.org/index.php/CNAPC/article/download/2811/2183", "len_cl100k_base": 5134, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 31564, "total-output-tokens": 7278, "length": "2e12", "weborganizer": {"__label__adult": 0.00255584716796875, "__label__art_design": 0.09716796875, "__label__crime_law": 0.002124786376953125, "__label__education_jobs": 0.0584716796875, "__label__entertainment": 0.0005230903625488281, "__label__fashion_beauty": 0.001922607421875, "__label__finance_business": 0.005367279052734375, "__label__food_dining": 0.00347137451171875, "__label__games": 0.0030307769775390625, "__label__hardware": 0.00554656982421875, "__label__health": 0.0294952392578125, "__label__history": 0.002105712890625, "__label__home_hobbies": 0.0006690025329589844, "__label__industrial": 0.00315093994140625, "__label__literature": 0.0023365020751953125, "__label__politics": 0.0007042884826660156, "__label__religion": 0.001854896545410156, "__label__science_tech": 0.09869384765625, "__label__social_life": 0.00030803680419921875, "__label__software": 0.0299224853515625, "__label__software_dev": 0.64599609375, "__label__sports_fitness": 0.0010900497436523438, "__label__transportation": 0.0024204254150390625, "__label__travel": 0.000995635986328125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28097, 0.02912]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28097, 0.18949]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28097, 0.87349]], "google_gemma-3-12b-it_contains_pii": [[0, 4315, false], [4315, 9474, null], [9474, 12234, null], [12234, 16124, null], [16124, 17085, null], [17085, 19311, null], [19311, 20474, null], [20474, 21688, null], [21688, 24709, null], [24709, 28097, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4315, true], [4315, 9474, null], [9474, 12234, null], [12234, 16124, null], [16124, 17085, null], [17085, 19311, null], [19311, 20474, null], [20474, 21688, null], [21688, 24709, null], [24709, 28097, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 28097, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28097, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28097, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28097, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28097, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28097, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28097, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28097, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28097, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28097, null]], "pdf_page_numbers": [[0, 4315, 1], [4315, 9474, 2], [9474, 12234, 3], [12234, 16124, 4], [16124, 17085, 5], [17085, 19311, 6], [19311, 20474, 7], [20474, 21688, 8], [21688, 24709, 9], [24709, 28097, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28097, 0.2377]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
7411eb36433b92a12a9b591f4bcf3ac39f19a54c
Midterm I
February 15th, 2024
CS162: Operating Systems and Systems Programming

General Information: This is a closed book exam. You are allowed 1 page of notes (both sides). You have 110 minutes to complete as much of the exam as possible. Make sure to read all of the questions first, as some of the questions are substantially more time consuming. Write all of your answers directly on this paper. Make your answers as concise as possible. On programming questions, we will be looking for performance as well as correctness, so think through your answers carefully. If there is something about the questions that you believe is open to interpretation, please ask us about it!

<table> <thead> <tr> <th>Problem</th> <th>Possible</th> <th>Score</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>20</td> <td></td> </tr> <tr> <td>2</td> <td>18</td> <td></td> </tr> <tr> <td>3</td> <td>22</td> <td></td> </tr> <tr> <td>4</td> <td>24</td> <td></td> </tr> <tr> <td>5</td> <td>16</td> <td></td> </tr> <tr> <td>Total</td> <td>100</td> <td></td> </tr> </tbody> </table>

3.14159265358979323846264338327950288419716939937510582097494459230781640628620899

Problem 1: True/False [20 pts] Please *EXPLAIN* your answer in TWO SENTENCES OR LESS (Answers longer than this may not get credit!). Also, answers without an explanation *GET NO CREDIT*.

**Problem 1a[2pts]:** Because the “monitor” pattern of synchronization involves sleeping inside a critical section, it requires special hardware support to avoid deadlock (i.e. a permanent lack of forward progress). - [ ] True [ ] False Explain:

**Problem 1b[2pts]:** In UNIX, Pipes are implemented with a buffer in user space. - [ ] True [ ] False Explain:

**Problem 1c[2pts]:** Any two processes in a running system have a common ancestor process. *Clarification during exam: a process is considered an ancestor of itself.* - [ ] True [ ] False Explain:

**Problem 1d[2pts]:** After the main thread in a process calls the `exit()` system call (either implicitly or explicitly), the other threads are allowed to run to completion before the process terminates. - [ ] True [ ] False Explain:

**Problem 1e[2pts]:** A user-level library implements each system call by executing a “transition to kernel mode” instruction followed by a procedure call to an appropriate system call handler in the kernel. To make this work, the user-level library must be built with access to a symbol table that contains the kernel addresses of each system call. - [ ] True [ ] False Explain:

Problem 1f[2pts]: Not every call to `fread()` must make a call to the kernel using the `read()` system call to retrieve data from the underlying file. - True - False Explain:

Problem 1g[2pts]: The `test&set` instruction consists of two independent operations: (1) read the value from memory and (2) replace it with the value “1”. Thus, it must be used in a loop to protect against the case in which multiple threads execute `test&set` simultaneously and end up interleaving these two operations from different threads. - True - False Explain:

Problem 1h[2pts]: It is possible for a client machine (with IP address X) and server machine (with IP address Y) to have multiple simultaneous but independent communication channels between them – despite the fact that network packets get intermixed in the middle. - True - False Explain:

Problem 1i[2pts]: When a process issues a system call in Pintos, the kernel must change the active page table (i.e., update the page table base register) so that the system call handler can access kernel memory. 
- True - False Explain: Problem 1j[2pts]: A user-level application can build an implementation of lock `acquire()` and `release()` by using the interrupt enable and disable functions of the pthread library. - True - False Explain: Problem 2: Multiple Choice [18pts] Problem 2a[2pts]: Which of the following statements about Base&Bound style address translation are true? (choose all that apply): A:□Base&Bound cannot protect kernel memory from being read by user programs. B:□Sharing memory between processes is difficult. C:□Context switches incur a much higher overhead compared to use of a page table. D:□Base&Bound will lead to external fragmentation. E:□With Base&Bound, each process can have its own version of address “0”. Problem 2b[2pts]: What are some things that can cause a transfer from user mode to kernel mode? (choose all that apply): A:□User code divides by zero. B:□The user executes a system call. C:□A packet is received from the network. D:□The application uses malloc() to allocate memory from the heap. E:□The timer goes off. Problem 2c[2pts]: Consider the following pseudocode implementation of a lock_acquire(). ```c lock_acquire() { interrupt_disable(); if (value == BUSY) { put thread on wait queue; Go to sleep(); } else { value = BUSY; } interrupt_enable(); } ``` Which of the following are TRUE? Assume we are running on a uniprocessor/single-core machine. (choose all that apply): A:□For this implementation to be correct, we should call interrupt_enable() before sleeping. B:□For this implementation to be correct, we should call interrupt_enable() before putting the thread on the wait queue. C:□For this implementation to be correct, sleep() should trigger the scheduler and the next scheduled thread should enable interrupts. D:□It is possible for this code to be run in user mode. E:□None of the above. Problem 2d[2pts]: Which of the following are true about semaphores (choose all that apply): A: ☐ Semaphores can be initialized to any 32-bit value in the range -2^{31} to 2^{31}-1 B: ☐ If there is at least one thread sleeping on a given semaphore, then the result of performing a `Semaphore.V()` will be to wake up one of the sleeping threads rather than incrementing the value of the semaphore. C: ☐ Semaphores cannot be used for locking. D: ☐ The interface for `Semaphore.P()` is specified in a way that prevents its implementation from busy-waiting, even for a brief period of time. E: ☐ The pure semaphore interface does not allow querying for the current value of the semaphore. Problem 2e[2pts]: In PintOS, every “user thread” has a chunk of memory that serves as a user-level stack and that is matched one-for-one with a corresponding kernel stack in kernel-protected memory. What is true about this arrangement (choose all that apply): A: ☐ When a user-thread makes a system call that must block (e.g. a read to a file that must be retrieved from disk), the user thread can be put to sleep at any time by pushing CPU state onto its kernel stack and putting that stack onto an appropriate wait queue inside the kernel. B: ☐ The kernel stack is often identified in figures as a “kernel thread” because it represents an independent computational entity that can run at the same time as the user thread. C: ☐ The kernel stack can grow to an arbitrary size (on demand) using virtual memory. D: ☐ The kernel gains safety by the presence of the kernel stack because it does not have to rely on the correctness of the user’s stack pointer register for correct behavior. E: ☐ None of the above. 
Problem 2f[2pts]: Which of the following statements about processes are true? (choose all that apply): A: ☐ If a process calls `execv()`, it does not need to free any of its allocated heap variables. B: ☐ Using IPC (such as a pipe), a child process is able to share the address of a stack-allocated variable directly with its parent process so that the two processes can communicate directly using load and store instructions. C: ☐ Each process has its own instance of user and kernel memory. D: ☐ If a parent process has multiple running threads at the time of `fork()`, then the child process will have multiple running threads immediately after `fork()`. E: ☐ Immediately after `fork()`, a child has a `duplicate` file descriptor table with pointers to `duplicated` file description structures for files that are open in the parent. Problem 2g[2pts]: Which of the following statements about files are true? (choose all that apply): A: ☐ The same file descriptor number can correspond to different files for different processes. B: ☐ The same file descriptor number can correspond to different files for different threads in the same process. C: ☐ Reserved 0, 1, and 2 (stdin, stdout, stderr) file descriptors cannot be overwritten by a user program. D: ☐ File descriptions keep track of the file offset. E: ☐ An lseek() within one process may be able to affect the writing position for another process. Problem 2h[2pts]: Select all true statements about the x86 Calling sequence (choose all that apply): A: ☐ Parameters are pushed onto the stack in the order that they are declared in the function. B: ☐ Right before the Caller jumps to the Callee function, the ESP must be 16-byte aligned. C: ☐ If padding is necessary for alignment, it should be added at a lower address than the parameters. D: ☐ In the Callee, space is allocated for local variables via subtracting from the ESP. E: ☐ None of the above. Problem 2i[2pts]: Which of the following are true about condition variables? (choose all that apply): A: ☐ cond_wait() can only be used when holding the lock associated with the condition variable. B: ☐ In practice, Hoare semantics are used more often than Mesa semantics. C: ☐ Mesa semantics will lead to busy waiting in cond_wait(). D: ☐ Each condition variable has its own wait queue for sleeping threads. E: ☐ All of the above. [ This page intentionally left blank ] Problem 3: Synchronization [22pts] Consider the following two threads, to be run concurrently in a shared memory (all variables are shared between the two threads): <table> <thead> <tr> <th>Thread A</th> <th>Thread B</th> </tr> </thead> <tbody> <tr> <td>for (i=0; i&lt;5; i++) { x = x + 1; }</td> <td>for (j=0; j&lt;5; j++) { x = x + 1; }</td> </tr> </tbody> </table> Assume a single-processor system, that load and store are atomic, and that x must be loaded into a register before being incremented (and stored back to memory afterwards). Assume that x is initialized to zero before either Thread A or Thread B start. **Problem 3a[3pts]:** Give a concise proof why x≠1 when both threads have completed. *Hint: consider the fact that each store is one greater than the value it loaded, even when threads interleave.* **Problem 3b[2pts]:** What is a critical section and what is the role of locking in enforcing correct behavior for a multithreaded program? In class, we discussed a number of *atomic* hardware primitives that are available on modern architectures. In particular, we discussed “test and set” (TSET), SWAP, and “compare and swap” (CAS). 
They can be defined as follows (let “expr” be an expression, “&addr” be an address of a memory location, and “M[addr]” be the actual memory location at address addr): ### Test and Set (TSET) ```c int TSET(&addr) { int result = M[addr]; M[addr] = 1; return(result); } ``` ### Atomic Swap (SWAP) ```c int SWAP(&addr, expr) { int result = M[addr]; M[addr] = expr; return (result); } ``` ### Compare and Swap (CAS) ```c bool CAS(&addr, expr1, expr2) { if (M[addr] == expr1) { M[addr] = expr2; return true; } else { return false; } } ``` Both TSET and SWAP return values from memory, whereas CAS returns either true or false. Note that our &addr notation is similar to a reference in C, and means that the &addr argument must be something that can be stored into. For instance, TSET could implement a spin-lock acquire as follows: ```c int lock = 0; // lock is free while (TSET(&lock)); // Later: acquire lock ``` **Problem 3c[2pts]:** Show how to implement a spinlock acquire() with a single while loop using CAS instead of TSET. Fill in the arguments to CAS below. *Hint: need to wait until can grab lock.* ```c void acquire(int *mylock) { while (!CAS(mylock, _________________, _______________)); } ``` **Problem 3d[3pts]:** In class we argued that spinlocks were a bad idea because they can waste a lot of processor cycles (busy waiting). There is, however, one case we mentioned in which spinlocks would be more efficient than blocking locks. When is that? Can you say why the spinlock of Problem 3c might be even better in this circumstance than the standard spinlock built with TSET? **Problem 3e[2pts]:** Fill in the blanks, to construct a lock-free implementation of a singly-linked list “push” operation. *Hint: We need to retry if someone changes the root point out from under us.* ```c typedef struct node { node_t *next; data_t mydata; } node_t; void push(node_t **rootp, node_t *newnode) { do { node_t *oldfront = *rootp; newnode->next = oldfront; } while (!CAS(rootp, _________________, _______________)); } ``` // Sample usage: node_t *root = NULL; // Empty list push(&root, mynewnode); // push new node onto list The issue with constructing locks using only user-level atomic instructions such as TSET or CAS is that we cannot block a thread (i.e. put it to sleep) when it is unable to acquire a lock. As discussed in class, `Futex()` is a *system call* that allows a thread to put itself to sleep, under certain circumstances, on a queue associated with a user-level address. It also allows a thread to ask the kernel to wake up threads that might be sleeping on the same queue. Recall that the function signature for `Futex` is: ```c int futex(int *uaddr, int futex_op, int val); ``` In this problem, we focus on two `futex_op` values: 1. For `FUTEX_WAIT`, the kernel checks atomically as follows: ```c if (*uaddr == val): the calling thread will sleep and another thread will start running if (*uaddr != val): the calling thread will keep running, i.e. `futex()` returns immediately ``` 2. For `FUTEX_WAKE`, this function will wake up to `val` waiting threads. You can assume in this problem that `val` will always be <= the actual number of waiting threads. **Problem 3f[2pts]:** Fill out the missing blanks, in the `acquire()` function, below, to make a version of a lock that does not busy wait. 
The corresponding `release()` function is given for you: ```c void acquire(int *thelock) { // Acquire a lock while (TSET(thelock)) { futex(thelock, ____________________, ____________________); } } void release(int *thelock) { // Release a lock *thelock = 0; futex(thelock, FUTEX_WAKE, 1); } ``` **Problem 3g[2pts]:** Under which circumstances is the lock implementation in **Problem 3f** suboptimal enough to look for a different implementation? **Problem 3h[3pts]:** To address the problem in **Problem 3g**, we can build a lock with three states: ```c typedef enum { UNLOCKED, LOCKED, CONTESTED } Lock; Lock mylock = UNLOCKED; // Initialize the lock in UNLOCKED state ``` Explain why we might want three states (rather than just LOCKED and UNLOCKED). In your explanation, make sure to say what each of the states indicate and under what circumstances the presence of the third state (CONTESTED) allows us to optimize performance. Problem 3i[3pts]: Fill in the missing blanks, below, for the three-state solution, using constants from the Lock enum. Hint: You are optimizing your lock based on the answer to Problem 3h: typedef enum { UNLOCKED, LOCKED, CONTESTED } Lock; Lock mylock = UNLOCKED; // Initialize the lock in UNLOCKED state // Acquire a lock void acquire(Lock *thelock) { // Fast case acquire: if (CAS(thelock, __________________________, __________________________)) return; // Contested acquire: while (SWAP(thelock, __________________________) != __________________________) futex(thelock, FUTEX_WAIT, CONTESTED); } // Release a lock void release(Lock *thelock) { if (SWAP(thelock, __________________________) == __________________________) futex(thelock, FUTEX_WAKE, 1); } Problem 4: Short Answer Potpourri [24pts] For the following questions, provide a concise answer of NO MORE THAN 2 SENTENCES per sub-question (or per question mark), unless instructed otherwise. Problem 4a[3pts]: What happens when an interrupt occurs? What does the interrupt controller do? Problem 4b[3pts]: The linked list nodes you have seen before (in 61B) stored each value alongside the previous and next pointers. However, this is not the case in PintOS. Below are the definition of the list_elem and list structures for PintOS: ```c struct list_elem { struct list_elem *prev; struct list_elem *next; }; struct list { struct list_elem head; struct list_elem tail; }; ``` Provide justification for why PintOS lists are implemented this way. In an actual list built using these definitions, where are the related values stored? Problem 4c[3pts]: What needs to be saved and restored on a context switch between two threads in the same process? What if the two threads are in different processes? Be explicit. Problem 4d[2pts]: What was the problem with the Therac-25 radiation therapy machine? Your answer should involve one of the topics of the class. Problem 4e[2pts]: What is “hyperthreading”? Does it allow threads to execute “concurrently” or “in parallel”? Explain. Problem 4f[2pts]: When a process thread executes fork(), this creates a new child process with an address space that is separate from the parent and that has identical contents to the parent process. Does an implementation of fork() require the operating system to copy all of the data of the parent process into the child process? Explain. Problem 4g[2pts]: When handling PintOS syscalls in userprog/syscall.c, how can we tell what syscall the user called, since there is only one syscall_handler function? 
**Problem 4h[4pts]:** Your friend tells you that they can open a file once but read it twice. Although you are skeptical, you decide to give it a try. Without utilizing another call to open, finish the following `double_read()` function. This function should read the specified file twice, storing the appended result as a duplicated, null-terminated string in the given buffer ‘buffer’. Note: *You must actually read() the file contents twice – do not just copy the data after reading the file once!* In the following, assume that all system calls succeed and that the buffer is big enough to hold the result. Read the file in a maximum of `CHUNK_SIZE` bytes at a time. *While you do not necessarily need to use every line here, you are also not allowed to add semicolons to existing lines.* ```c #define CHUNK_SIZE 1024 void double_read(char *filename, char *buffer) { int fd = open(filename, O_RDONLY); ______________________________; ______________________________; for (_________; __________; __________) { while(1) { ______________________________; if (_____________________) break; ______________________________; } } ______________________________; close(fd); } ``` **Problem 4i[3pts]:** The “high-level” file I/O interface (i.e. `fopen()`, `fclose()`, `fread()`, `fwrite()`, and other associated functions) is often called the “streaming I/O interface” in contrast to the “low-level” interface (i.e. `open()`, `close()`, `read()`, `write()`). Why is this terminology/distinction appropriate? (*Hint: start by explaining what a streaming communication pattern might is.* Problem 5: Stock Trading [16] Stock trading is a very dynamic process with multiple traders simultaneously issuing buy and sell requests. As a result, any system that supports trading must deal with synchronization to provide correct behavior. In this problem, we will build a system to match sell requests to buyers. One essential element of our solution is that each particular stock has a **match_queue** that is used for coordinating the sales of that stock. We will build a fully synchronized solution using the monitor pattern with pthreads (i.e. using pthread_mutex_t and pthread_cond_t variables). Both sellers and buyers will be put to sleep while their corresponding trades are matched and executed. **Problem 5a[2pts]:** We will represent a sell request with a **sell_request_t** structure and the queue of pending sell requests with a **match_queue_t** structure. Complete the following definitions. *Assume that we will use a PintOS list to link sell requests into the match_queue_t structure.* Further assume that buyers will sleep on one condition variable and sellers will sleep on a separate one, both of which are stored within the match_queue. Do not add any semicolons! ```c typedef struct sell_request { int waiting_sell; // Remaining # shares for this seller struct list_elem mylink; // Link for PintOS list } sell_request_t; typedef struct match_queue { char *stock_symbol; // String describing stock int waiting_buy; // Number waiting buyers // Additional fields... } match_queue_t; ``` **Problem 5b[2pts]:** Complete the following match_queue allocator, assuming calloc succeeds). Please do not add any semicolons. Error codes for syscalls and pthread functions do not need to be checked. ```c match_queue_t *match_queue_alloc(char *stock_symbol) { match_queue_t *new_match_queue = (match_queue_t *)calloc(1, sizeof(match_queue_t)); new_match_queue->stock_symbol = stock_symbol; // Additional fields... 
return new_match_queue; } ``` Problem 5c[6pts]: Fill in the missing blanks for the buyer’s routine, below. Assume that buy_stock_match() should not return until the total requested shares have been matched with a seller. However, busy-waiting is strictly forbidden. Make sure that sellers (represented by sell_request_t structures with a non-zero sell_waiting value are handled strictly in order. Error codes for syscalls and pthread functions do not need to be checked. Note: The matchq argument represents a pointer that has gone through your match_queue_alloc() routine, so should be properly initialized. While you do not necessarily need to use every line here, you are also not allowed to add semicolons to existing lines. Hint: The buyers and sellers work together. Thus, as you will handle in Problem 3d, sellers will sleep waiting for their shares to be completely matched with buyers. And buyers will sleep waiting for enough shares to become available. // Buy num_shares of a stock through the given match_queue // Assume num_shares > 0 and matchq is valid (from match_queue_alloc()) void buy_stock_match(match_queue_t *matchq, int num_shares) { while(num_shares) { if (list_empty(___________________________________________)) { matchq->waiting_buy += 1; ______________________________________________________________; ______________________________________________________________; ______________________________________________________________; ______________________________________________________________; matchq->waiting_buy -= 1; } else { struct list_elem *fe = list_pop_front(__________________________); sell_request_t *first = list_entry(_______________, ___________________, ________________); if (first->waiting_sell > num_shares) { ______________________________________________________________; ______________________________________________________________; ______________________________________________________________; ______________________________________________________________; } else { ______________________________________________________________; ______________________________________________________________; ______________________________________________________________; ______________________________________________________________; } } } } Problem 5d[4pts]: Fill in the missing blanks for the seller’s routine. Assume that calloc() always succeeds and that the seller should be put to sleep until all of their stock shares have been matched to buyers. Make sure that new sellers are not allowed to jump in front of existing sellers. Error codes for syscalls and pthread functions do not need to be checked. Note: The matchq argument represents a pointer that has gone through your match_queue_alloc() routine, so should be properly initialized. Busy-waiting is not allowed. Further, you should make sure that there are no memory leaks (it is up to the seller to free any sell structures they have allocated). While you do not necessarily need to use every line here, you are also not allowed to add semicolons to existing lines. 
```c // Sell num_shares of a stock through the given match_queue // Assume num_shares > 0 and matchq is valid (from match_queue_alloc()) void sell_stock_match(match_queue_t *matchq, int num_shares) { sell_request_t *req = (sell_request_t *)calloc(1, sizeof(sell_request_t)); req->waiting_sell = num_shares; list_push_back(_______________________, _________________________); if (matchq->waiting_buy){ _________________________________; } while (______________________________){ _________________________________; } _________________________________; _________________________________; _________________________________; } ``` Problem 5e[2pts]: Suppose that many sellers show up all at once, before buyers arrive. Then, suppose buyers arrive one at a time. Although still correct, the above solution will incur a lot of scheduler overhead. Explain the problem in three sentences or less and how you would fix it. [ This page intentionally left blank ] [Function Signature Cheat Sheet] /**************************** pThreads ****************************/ int pthread_create(pthread_t *thread, const pthread_attr_t *attr, void *(*start_routine) (void *) , void *arg); int pthread_join(pthread_t thread, void **retval); int pthread_mutex_init(pthread_mutex_t *mutex); int pthread_mutex_lock(pthread_mutex_t *mutex); int pthread_mutex_unlock(pthread_mutex_t *mutex); int pthread_cond_init(pthread_cond_t *cond); int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex); int pthread_cond_signal(pthread_cond_t *cond); int pthread_cond_broadcast(pthread_cond_t *cond); /**************************** Processes ****************************/ pid_t fork(void); pid_t wait(int *status); pid_t waitpid(pid_t pid, int *status, int options); int execv(const char *path, char *const argv[]); /**************************** High-Level I/O ****************************/ FILE *fopen(const char *path, const char *mode); FILE *fdopen(int fd, const char *mode); size_t fread(void *ptr, size_t size, size_t nmemb, FILE *stream); size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream); int fclose(FILE *stream); /**************************** Sockets ****************************/ int socket(int domain, int type, int protocol); int bind(int sockfd, struct sockaddr *addr, socklen_t addrlen); int listen(int sockfd, int backlog); int accept(int sockfd, structure sockaddr *addr, socklen_t addrlen); int connect(int sockfd, struct sockaddr *addr, socklen_t addrlen); ssize_t send(int sockfd, const void *buf, size_t len, int flags); /**************************** Low-Level I/O ****************************/ int open(const char *pathname, int flags); ssize_t read(int fd, void *buf, size_t count); ssize_t write(int fd, const void *buf, size_t count); off_t lseek(int fd, off_t offset, int whence); whence=>SEEK_SET, SEEK_CUR, or SEEK_END int dup(int oldfd); int dup2(int oldfd, int newfd); int pipe(int pipefd[2]); int close(int fd); ### [Function Signature Cheat Sheet (con’t)] ```c /*************************************** Pintos ***********************************/ void list_init(struct list *list); struct list_elem *list_head(struct list *list) struct list_elem *list_tail(struct list *list); struct list_elem *list_begin(struct list *list); struct list_elem *list_next(struct list_elem *elem); struct list_elem *list_end(struct list *list); struct list_elem *list_remove(struct list_elem *elem); bool list_empty(struct list *list); #define list_entry(LIST_ELEM, STRUCT, MEMBER) ... 
void list_insert(struct list_elem *before, struct list_elem *elem); void list_push_front(struct list *list, struct list_elem *elem); void list_push_back(struct list *list, struct list_elem *elem); struct list_elem *list_pop_front(struct list *list); struct list_elem *list_push_front(struct list *list); void sema_init(struct semaphore *sema, unsigned value); void sema_down(struct semaphore *sema); void sema_up(struct semaphore *sema); void lock_init(struct lock *lock); void lock_acquire(struct lock *lock); void lock_release(struct lock *lock); ``` [Scratch Page: Do not put answers here!] [Scratch Page: Do not put answers here!]
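As a usage note for the Pintos list functions in the cheat sheet above: they operate on `struct list_elem` links embedded inside larger structures, and `list_entry()` recovers the containing structure from a link. A minimal sketch (with a hypothetical `struct item` and field names) might look like this:

```c
#include <list.h>   /* Pintos kernel list library (lib/kernel/list.h) */

/* A value-carrying structure embeds the intrusive link used by the list. */
struct item {
    int value;               /* payload stored alongside the links */
    struct list_elem elem;   /* link owned by the containing structure */
};

/* Walks the list and recovers each containing struct with list_entry(). */
static int sum_items(struct list *items) {
    int total = 0;
    for (struct list_elem *e = list_begin(items); e != list_end(items);
         e = list_next(e)) {
        struct item *it = list_entry(e, struct item, elem);
        total += it->value;
    }
    return total;
}
```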
{"Source-Url": "https://cs162.org/static/exams/sp24-mt1.pdf", "len_cl100k_base": 6785, "olmocr-version": "0.1.53", "pdf-total-pages": 24, "total-fallback-pages": 0, "total-input-tokens": 48053, "total-output-tokens": 7927, "length": "2e12", "weborganizer": {"__label__adult": 0.0005898475646972656, "__label__art_design": 0.0005011558532714844, "__label__crime_law": 0.000514984130859375, "__label__education_jobs": 0.0193634033203125, "__label__entertainment": 0.00014138221740722656, "__label__fashion_beauty": 0.00027441978454589844, "__label__finance_business": 0.00025343894958496094, "__label__food_dining": 0.0008025169372558594, "__label__games": 0.0017852783203125, "__label__hardware": 0.002593994140625, "__label__health": 0.0006508827209472656, "__label__history": 0.0005550384521484375, "__label__home_hobbies": 0.0002377033233642578, "__label__industrial": 0.000919342041015625, "__label__literature": 0.0005316734313964844, "__label__politics": 0.0004727840423583984, "__label__religion": 0.0008358955383300781, "__label__science_tech": 0.0340576171875, "__label__social_life": 0.00029921531677246094, "__label__software": 0.0065460205078125, "__label__software_dev": 0.92578125, "__label__sports_fitness": 0.000606536865234375, "__label__transportation": 0.0011548995971679688, "__label__travel": 0.00035190582275390625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28752, 0.0099]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28752, 0.38381]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28752, 0.80861]], "google_gemma-3-12b-it_contains_pii": [[0, 929, false], [929, 1012, null], [1012, 2392, null], [2392, 3683, null], [3683, 5343, null], [5343, 7878, null], [7878, 9386, null], [9386, 9425, null], [9425, 10309, null], [10309, 12740, null], [12740, 14910, null], [14910, 15716, null], [15716, 16751, null], [16751, 17525, null], [17525, 19190, null], [19190, 19190, null], [19190, 21202, null], [21202, 23767, null], [23767, 25526, null], [25526, 25565, null], [25565, 27560, null], [27560, 28671, null], [28671, 28712, null], [28712, 28752, null]], "google_gemma-3-12b-it_is_public_document": [[0, 929, true], [929, 1012, null], [1012, 2392, null], [2392, 3683, null], [3683, 5343, null], [5343, 7878, null], [7878, 9386, null], [9386, 9425, null], [9425, 10309, null], [10309, 12740, null], [12740, 14910, null], [14910, 15716, null], [15716, 16751, null], [16751, 17525, null], [17525, 19190, null], [19190, 19190, null], [19190, 21202, null], [21202, 23767, null], [23767, 25526, null], [25526, 25565, null], [25565, 27560, null], [27560, 28671, null], [28671, 28712, null], [28712, 28752, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 28752, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28752, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28752, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28752, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28752, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28752, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28752, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28752, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, true], 
[5000, 28752, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28752, null]], "pdf_page_numbers": [[0, 929, 1], [929, 1012, 2], [1012, 2392, 3], [2392, 3683, 4], [3683, 5343, 5], [5343, 7878, 6], [7878, 9386, 7], [9386, 9425, 8], [9425, 10309, 9], [10309, 12740, 10], [12740, 14910, 11], [14910, 15716, 12], [15716, 16751, 13], [16751, 17525, 14], [17525, 19190, 15], [19190, 19190, 16], [19190, 21202, 17], [21202, 23767, 18], [23767, 25526, 19], [25526, 25565, 20], [25565, 27560, 21], [27560, 28671, 22], [28671, 28712, 23], [28712, 28752, 24]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28752, 0.02676]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
98ead0aac751c9a404b0dc0a2e9988b942b6757e
[REMOVED]
{"Source-Url": "http://lnu.diva-portal.org/smash/get/diva2:623688/FULLTEXT02", "len_cl100k_base": 7734, "olmocr-version": "0.1.50", "pdf-total-pages": 22, "total-fallback-pages": 0, "total-input-tokens": 50919, "total-output-tokens": 12716, "length": "2e12", "weborganizer": {"__label__adult": 0.0003688335418701172, "__label__art_design": 0.0004730224609375, "__label__crime_law": 0.00033402442932128906, "__label__education_jobs": 0.0012922286987304688, "__label__entertainment": 6.121397018432617e-05, "__label__fashion_beauty": 0.00018477439880371096, "__label__finance_business": 0.00023996829986572263, "__label__food_dining": 0.000274658203125, "__label__games": 0.0006456375122070312, "__label__hardware": 0.000522613525390625, "__label__health": 0.0003893375396728515, "__label__history": 0.0002980232238769531, "__label__home_hobbies": 8.690357208251953e-05, "__label__industrial": 0.0003161430358886719, "__label__literature": 0.0003025531768798828, "__label__politics": 0.00032782554626464844, "__label__religion": 0.0003561973571777344, "__label__science_tech": 0.010894775390625, "__label__social_life": 9.047985076904296e-05, "__label__software": 0.004650115966796875, "__label__software_dev": 0.97705078125, "__label__sports_fitness": 0.0002913475036621094, "__label__transportation": 0.00043892860412597656, "__label__travel": 0.00019752979278564453}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 49324, 0.04048]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 49324, 0.38497]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 49324, 0.85961]], "google_gemma-3-12b-it_contains_pii": [[0, 614, false], [614, 2591, null], [2591, 4570, null], [4570, 7281, null], [7281, 7914, null], [7914, 10229, null], [10229, 12762, null], [12762, 15517, null], [15517, 17176, null], [17176, 18748, null], [18748, 22797, null], [22797, 24478, null], [24478, 24843, null], [24843, 26780, null], [26780, 27451, null], [27451, 29749, null], [29749, 32530, null], [32530, 35615, null], [35615, 39752, null], [39752, 43575, null], [43575, 46692, null], [46692, 49324, null]], "google_gemma-3-12b-it_is_public_document": [[0, 614, true], [614, 2591, null], [2591, 4570, null], [4570, 7281, null], [7281, 7914, null], [7914, 10229, null], [10229, 12762, null], [12762, 15517, null], [15517, 17176, null], [17176, 18748, null], [18748, 22797, null], [22797, 24478, null], [24478, 24843, null], [24843, 26780, null], [26780, 27451, null], [27451, 29749, null], [29749, 32530, null], [32530, 35615, null], [35615, 39752, null], [39752, 43575, null], [43575, 46692, null], [46692, 49324, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 49324, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 49324, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 49324, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 49324, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 49324, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 49324, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 49324, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 49324, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 49324, null]], 
"google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 49324, null]], "pdf_page_numbers": [[0, 614, 1], [614, 2591, 2], [2591, 4570, 3], [4570, 7281, 4], [7281, 7914, 5], [7914, 10229, 6], [10229, 12762, 7], [12762, 15517, 8], [15517, 17176, 9], [17176, 18748, 10], [18748, 22797, 11], [22797, 24478, 12], [24478, 24843, 13], [24843, 26780, 14], [26780, 27451, 15], [27451, 29749, 16], [29749, 32530, 17], [32530, 35615, 18], [35615, 39752, 19], [39752, 43575, 20], [43575, 46692, 21], [46692, 49324, 22]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 49324, 0.31502]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
6c15ca90bb5456b28632607c338c275ae4e04c60
Improvement and Analysis of Encryption and Decryption Performance in Cloud Document

Yonglong Zhuang, Xiaolan Weng, and Yuwu Wang
Huaiyin Normal University, Huai’an 223300, PR China
Email: zhuangyongl@yeah.net, wxl@hytc.edu.cn, wyw@hytc.edu.cn

Haiyang Zhuang
Department of Civil and Environmental Engineering, University of Southern California, CA, USA
Email: OCEAN_JOE@hotmail.com

Abstract—Cloud computing has been a hot topic in recent years, and cloud-based collaborative services are an emerging class of service. Cloud users can place their data into the cloud no matter where they are or which computers they use; as long as they are connected to the Internet, they can store and retrieve the data placed in the cloud. In collaborative scenarios, private information can be shared with other cloud users in a group; for example, an online collaborative file-editing service lets many collaborators jointly write a file. In the past, the whole file was encrypted and decrypted in order to protect data privacy, but this is rather time consuming in multi-person collaboration. This paper analyzes text editing in the collaborative service and introduces the rbTree-Doc framework in order to reduce the amount of data that must be encrypted. Although it adds the cost of establishing and maintaining the rbTree-Doc, the experimental results show that the rbTree-Doc framework still lets collaborators carry out the text-editing functions. For insert and remove operations combined with the encryption/decryption algorithms, efficiency improves by 31.04% when the 3DES encryption algorithm is adopted and by 23.94% when the AES encryption algorithm is adopted.

Index Terms—Cloud Computing, Data Privacy, Collaborative Service, Red-Black Tree, Encryption Algorithm.

I. INTRODUCTION

Nowadays many manufacturers have launched cloud text-editing services, such as Google Docs, Microsoft Office Live, Zoho and Adobe Buzzword. These cloud services not only have complete functionality but are also provided to users for free. On the one hand, cloud computing achieves file portability: users can place their data into the cloud, so whether in the office or outdoors, and whichever computer or mobile phone is used, they can store and retrieve the data placed in the cloud as long as they are connected to the Internet. On the other hand, a personal computer's hardware does not need to be very powerful when the cloud handles the computing and storage tasks; it no longer has to carry complex operating systems and application programs, and it does not require dedicated maintenance personnel for the computer equipment. In the future, computer equipment will become simpler and cheaper. Online office suites based on web applications are becoming very popular, and users can easily use web browsers to collaborate on a document. The cloud-based collaborative service is an emerging service: it not only allows cloud users in a group to share private information, such as online collaborative text editing and calendar sharing, but also offers high availability and a common stored/shared state. A collaborative service deployed on a cloud platform has many advantages, such as global accessibility, high availability, high fault tolerance, and flexible resource allocation and extension. However, these advantages rest on the assumption that the cloud service provider can be completely trusted. 
In fact, the concentration of information makes the cloud service provider an attractive target for attack. This not only may invite malicious intrusion, but also raises doubts about the risk that private information could be leaked by the servers [1]. On the other hand, users do not really possess the information themselves; they process their data through the cloud. Therefore, users worry that their data may be leaked or lost, and this problem may be an important barrier to the development of cloud services. The premise of this paper is that cloud users cannot completely trust the cloud service providers. The usual method of preserving data privacy adopts modern cryptographic technology to encrypt the user's sensitive data before placing it into the cloud; without the decryption key, the contents of the sensitive data cannot be obtained [2-4]. If users use a collaborative text-editing service, the shared document may start as a new blank document, and an intact document is commonly created from scratch, but the owner and the co-authors of the document hold the encryption and decryption keys. Users encrypt the whole document with the encryption key during the editing process and then upload the cipher-text to the cloud in order to update and store it. In order to ensure that the documents possessed by all collaborators are always in the latest state, the cloud service proactively notifies the collaborators who have the document open at the same time. After downloading the cipher-text onto their hosts, they restore the whole document with the decryption key. The time consumed in the encryption/decryption process is proportional to the size of the data. As the document becomes mature and complete, the space it occupies keeps increasing; even if only a small part of the content is modified, the whole document is still encrypted, and all collaborators must download the whole cipher-text and then decrypt it to get the latest document. In a multi-person collaborative situation, frequently modified documents result in a large number of encryption/decryption computations, and the time consumed grows with the size of the data in the document. This paper analyzes text editing in the collaborative service: it divides the text inside the document into many blocks and maintains the corresponding position of every block with a red-black tree. When the text inside a certain block is altered, only that block needs to be encrypted and updated. As a result, users performing text-editing functions, such as insert/remove operations and encryption/decryption computations, work in a highly efficient and unobtrusive manner.

II. RELATED WORK

This section first explains the definition of cloud computing, then discusses related work on preserving the privacy of data stored in the cloud, and finally introduces the red-black tree technique.

A. Cloud Computing

The popularization of networks and the maturation of virtualization technology provide a good environment for cloud computing. Cloud computing represents a new information infrastructure: it automatically divides a huge computing procedure into many small sub-procedures through distributed computing, processes them on a large number of computers, and then passes the result back to the client. The service provider offers computing, software applications and data storage capabilities so that users can store and access them ubiquitously. 
Finally, data and services are stored and distributed across large data centers built by the service provider, and users obtain the information and services they need by connecting their devices to the network. The U.S. National Institute of Standards and Technology (NIST) defines cloud computing in terms of five essential characteristics, four deployment models and three service models.

The five essential characteristics are:
(1) On-demand self-service
(2) Ubiquitous network access from any device
(3) Rapid elasticity
(4) Location-independent resource pooling
(5) Measured service

The four deployment models are:
(1) Private cloud: operated solely for a single organization and managed either by the organization itself or by a third party. It is appropriate when high data confidentiality is required.
(2) Community cloud: shared by several organizations and managed by one of them or by a third party. It suits, for example, academic institutions that need to share research data.
(3) Public cloud: owned by a cloud service provider and offered to the general public or to enterprises. It is most suitable for start-ups or small and medium enterprises whose data confidentiality requirements are low.
(4) Hybrid cloud: a composition of two or more of the above clouds. Data and application portability across the different platforms must be ensured through standards or new technology.

The three service models are:
(1) Infrastructure as a service (IaaS): rents out computing, storage, network and other resources while keeping users away from the physical hardware. The provider allocates virtual machines on hosts in its data center and can change the number of CPUs and the amount of storage according to the user's request.
(2) Platform as a service (PaaS): provides a hosted platform on which users deploy and run their own applications without managing the underlying infrastructure.
(3) Software as a service (SaaS): provides software that users consume directly, with nothing to download or install. Users access the provider's applications through a browser or another dedicated client connected to the network. Software-on-demand and application-on-demand let users rent exactly the application software they need through the Internet.

B. Cloud Information Privacy

To address the cloud information privacy problem, [2-4] focus mainly on policy protocols and provide the cloud service provider with an ideal enforcement policy; the experimental results in [6] show that such a scheme can withstand Denial of Service (DoS) attacks. These approaches, however, still presume a trusted cloud service provider, and enforcing the policies restricts how the existing cloud servers can be used. In contrast to formulating security policies, [2-4] assume that users do not fully trust the provider: they encrypt sensitive data with modern cryptography before placing it in the cloud, so that the content cannot be recovered without the decryption key. [4] and [7] use Google Docs as the experimental platform: a browser plug-in decrypts a Google Docs document after it is downloaded to the local computer and presents the plaintext for the user to view and edit.
During editing, the plug-in also encrypts the document before uploading it to the Google Docs cloud for updating and storage. Although this method is simple to implement, how the other users sharing the document obtain the key remains a problem. For this reason, attribute-based encryption (ABE) has received growing attention in recent years [5] [8] [10]. ABE is similar to a traditional public key infrastructure (PKI), except that instead of a randomly generated public key, an entity uses a unique string, such as an e-mail address, as its public key. The ABE mechanism has the following four features:

1. Resource providers encrypt sensitive data according to attributes and do not need to know the identity or number of the recipients, which reduces the cost of data encryption and protects user privacy.
2. Only users whose attributes match the ciphertext can decrypt the data, which guarantees data privacy.
3. Each user's private key is tied to a random polynomial or random number, so the private keys of different users cannot be combined; this prevents collusion attacks.
4. ABE supports flexible access-control policies.

These advantages make ABE well suited to cloud collaborative sharing services. [9-12] concentrate on improving ABE and introduce the idea of fine-grained access control, while [7-9] address key generation and distribution. C. Wang et al. [13-16] observe that storing data in the cloud saves users local storage and maintenance costs, but protecting the integrity of all users' data is a challenge for the provider, and corruption or loss of a user's data may go undetected. They therefore propose that a third-party auditor (TPA) be responsible for checking data integrity on the user's behalf, while ensuring that the TPA does not learn the content of the data stored in the cloud, so that the user's data privacy is preserved.

C. Red-Black Tree

A red-black tree is a binary search tree data structure first published by R. Bayer [17] in 1972 under the name symmetric binary B-tree; the name in common use today is due to L. J. Guibas and R. Sedgewick [18] in 1978. A red-black tree is an approximately balanced tree: nodes are colored so that no root-to-leaf path is more than twice as long as any other. A red-black tree has the following five properties [19]:

1. Every node is either red or black.
2. Every leaf node is black.
3. The root is black.
4. If a node is red, both of its children are black.
5. Every path from the root to a leaf contains the same number of black nodes.

Property (4) implies that two red nodes can never be adjacent, so the shortest possible root-to-leaf path consists only of black nodes, while the longest possible path alternates red and black nodes. Combined with property (5), which fixes the number of black nodes on every root-to-leaf path, this means no path can be more than twice as long as any other, so the tree is roughly balanced. Well-known, if somewhat intricate, algorithms [20][21] maintain these properties so that search, insert and delete all have good worst-case running times.
Table 1 lists the worst-case time complexity of search, insert and delete on a red-black tree, where n is the number of nodes. Owing to this efficiency, red-black trees are often used in real-time processing applications.

<table>
<thead>
<tr><th>Operation</th><th>Worst-case time complexity</th></tr>
</thead>
<tbody>
<tr><td>Search</td><td>O(log n)</td></tr>
<tr><td>Insert</td><td>O(log n)</td></tr>
<tr><td>Delete</td><td>O(log n)</td></tr>
</tbody>
</table>

III. PROPOSED SCHEME

Current collaborative text-editing services are mostly built on Asynchronous JavaScript and XML. AJAX [22] is a web development technique widely supported by browsers; it can transmit and update data without reloading the whole page, and it avoids the server resending information that has not changed. Because the entire page does not have to be reloaded, the amount of data transmitted is greatly reduced and the user does not perceive any data transfer or screen refresh. When the client performs operations on links, forms, text and so on, JavaScript [23] events are triggered and sent to the server for processing; when the server responds, the client handles the response with JavaScript and updates the screen accordingly.

This paper introduces the rbTree-Doc structure. The client generates and maintains an rbTree-Doc for each document. For example, when a user selects the document to edit, the necessary information is downloaded to the host, the document is decrypted, and the rbTree-Doc is computed and rebuilt; the content of the document is then displayed in the browser. During editing, the rbTree-Doc is updated synchronously, and whenever an insert or remove operation is performed, the corresponding node is tagged to indicate that its content has changed. Only the text of tagged nodes, rather than the whole document, needs to be encrypted and uploaded to the cloud for updating. The next section explains the rbTree-Doc data processing procedure in detail.

A. System Design

The system design is shown in Figure 1. The cake-shaped icon represents the cloud, the cloud-shaped icon represents the cloud collaborative software, and the icon on the right represents the database. The Rebuilding function converts an existing uploaded document into rbTree-Doc form. First, the cloud collaborative software (App) retrieves the document the user wants to edit from the database and records all users of that document. When a user opens the document, the Building rbTree-Doc step is executed to rebuild the rbTree-Doc while the document is decrypted. During editing, the rbTree-Doc is updated synchronously through Update_rbTree-Doc; only the content of the changed, tagged nodes is encrypted and then uploaded to the cloud for updating. Besides updating and storing the document, the cloud collaborative software (App) also notifies the other collaborators and transmits the necessary information to them. When a collaborator's copy is updated, only the affected nodes need to be updated and only the updated text needs to be decrypted, because each collaborator has already built the rbTree-Doc locally. Finally, the Recovering function restores the document.

![Figure 1. The system flow chart in the paper](image)
1. Document Archive Form

Users can edit documents in a web browser as easily as in traditional desktop text-editing software, largely thanks to WYSIWYG (What You See Is What You Get) HTML editors. A WYSIWYG HTML editor makes text, figures and other content look the same during editing as in the final output, much as a picture displayed on screen matches the picture printed on paper. It offers an intuitive graphical user interface (GUI) and automatically converts the content into HTML, so users can edit the document without typing any HTML tags.

When a user uploads an existing document, the user-defined document format must be converted into an HTML document. The Rebuilding function splits the original HTML document into content and a style sheet, while the Recovering function combines the content and the style sheet to reconstruct the original HTML document. The content is encoded in UTF-16, which avoids garbled characters and decoding problems. UTF-16 is a variable-length encoding: for example, English characters occupy 2 bytes, commonly used Chinese characters occupy 2 bytes, and rarely used Chinese characters occupy 4 bytes. After the content is extracted during Rebuilding, it is divided into blocks according to a preset block length, and the text of each block is encrypted separately. The result is saved as an XML archive, as shown in Figure 2, and the XML document archive and the style-sheet archive are uploaded and stored in the cloud together.

If the block length is set to 5 bytes, each block stores at most 5 characters. If the fill factor is set to 100, each block is filled up to 100%. In practice the fill factor is set between 75 and 90 so that each block keeps some free space for new text to be stored directly in that block; this reduces the amount of encryption work at the cost of a larger number of blocks. The encryption algorithm field (encrypt-Alg) indicates that any encryption algorithm can be used; the Advanced Encryption Standard (AES), for instance, is a common choice. element_id is the position (index) of the first character of each block within the whole document, and <data> is the ciphertext of the block's text. element_id and <data> are the key pieces of information used to build the rbTree-Doc.

2. Establishing the Red-Black Tree

When a user first opens the document, the cloud collaborative software (App) returns two archives: the XML document archive and the style-sheet archive. To display the original document content, the client uses a red-black tree keyed by element_id. A red-black tree is a self-balancing binary search tree whose entries are sorted by the key of each key-value pair. Because element_id is the position of the first character of a block within the whole document, it is both unique and ordered, which makes it well suited to serve as the red-black tree key. Taking Figure 2 as an example, the rebuilt rbTree-Doc has two nodes, the key set is {0, 5}, and each node's content (value) is the decrypted <data> of the block whose element_id matches the key.
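As an illustration of this rebuilding step, the following is a minimal Java sketch, not taken from the paper. It assumes a single AES key and IV shared by the collaborators, and it uses java.util.TreeMap, which is itself backed by a red-black tree, as the element_id index; all names are hypothetical.

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.TreeMap;

/** Illustrative rbTree-Doc rebuild: element_id -> decrypted block text. */
public class RbTreeDoc {
    // TreeMap is backed by a red-black tree, matching the paper's choice of structure.
    private final TreeMap<Integer, String> blocks = new TreeMap<>();
    private final SecretKey key;
    private final IvParameterSpec iv;

    public RbTreeDoc(SecretKey key, IvParameterSpec iv) {
        this.key = key;
        this.iv = iv;
    }

    /** Rebuild the tree from (element_id, encrypted <data>) pairs parsed from the XML archive. */
    public void rebuild(Map<Integer, byte[]> encryptedBlocks) throws Exception {
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        for (Map.Entry<Integer, byte[]> e : encryptedBlocks.entrySet()) {
            c.init(Cipher.DECRYPT_MODE, key, iv);
            String text = new String(c.doFinal(e.getValue()), StandardCharsets.UTF_16);
            blocks.put(e.getKey(), text);   // element_id is the tree key
        }
    }

    /** Concatenate the blocks in key order to display the full document. */
    public String fullText() {
        StringBuilder sb = new StringBuilder();
        for (String block : blocks.values()) sb.append(block);
        return sb.toString();
    }
}
```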
B. Data Processing Flow

After the rbTree-Doc has been rebuilt, the user may perform operations such as insert and remove on the document, changing part or all of its data. Whenever the document content changes, the rbTree-Doc must be maintained and the affected nodes (or, in the worst case, all nodes) uploaded. Throughout this process the document content must stay consistent with the rbTree-Doc, and the operations must remain efficient and imperceptible to the user.

1. Position

Before document data can be handled efficiently, the exact position affected by a change must be determined. When the document content changes, the program obtains the cursor position within the document. The next step is to find the rbTree-Doc node corresponding to the changed content. Because the rbTree-Doc uses the document position of the first character of each block as the key, there is no need to traverse the tree; searching the key set directly is more efficient. Once the corresponding node has been found, the cursor position in the document is converted into an offset within that node's text block, offset = position − node ID, where position is the cursor position in the document and node ID is the node's key. Thus, wherever an edit occurs in the document, the rbTree-Doc can quickly locate the corresponding node and the position within its text, and then perform the appropriate operation.

2. Insert Operation

When text is inserted into the document, only the affected nodes of the rbTree-Doc need to change; no other node is modified. When an insert event occurs, the program obtains the current cursor position in the document, the inserted text, the node ID of the corresponding node and the offset within it, and then updates the key set. Because each key is the document position of the first character in a node's block, an insertion shifts the positions of all subsequent text forward, so every key after the insertion point must be corrected to new key = key + inserted length. Only the keys change; the structure of the rbTree-Doc itself is unchanged.

The inserted text is usually a single character, but it can also be longer, for example when a passage is copied and pasted. If the length of the node's block plus the length of the inserted text does not exceed the configured block length, the text is inserted directly into that node's block. If the remaining space in the block is not large enough, the block's text is split at the offset into a "before" part and an "after" part; the block is filled up to the block length with the "before" part, and a new node is inserted whose key is again the document position of its first character and whose content is the remaining inserted text together with the "after" part. If the inserted text itself is longer than a block, it is divided into block-length pieces and a node is inserted for each piece. If the remaining inserted text and the "after" part together exceed a block length, separate nodes are inserted for the inserted text and for the "after" part.

3. Remove Operation

When a remove event occurs, the program obtains the current cursor position in the document and the length of the text to delete. It first determines the node IDs of the first and last affected nodes. If they are the same node, the text is removed directly from that node; otherwise, all affected text between the two nodes is deleted. If a node is left with no text, the node itself is removed immediately. Finally, because a removal shifts the positions of subsequent text backwards, every key after the deletion point must be corrected to new key = key − deleted length. An update operation is in fact a remove operation followed by an insert operation, so it is not described separately.
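The position step above maps naturally onto TreeMap's floorEntry lookup. The sketch below is illustrative only and is my own construction rather than the paper's code: it locates the affected block, applies an insert, tags the block as changed so that only its text is re-encrypted on the next upload, and re-keys the later blocks. Class and method names are hypothetical.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

/** Illustrative position lookup and change tagging on the element_id tree. */
public class RbTreeDocIndex {
    private final TreeMap<Integer, StringBuilder> blocks = new TreeMap<>();
    private final Set<Integer> dirty = new HashSet<>();   // keys of blocks that must be re-encrypted

    public void putBlock(int elementId, String text) {
        blocks.put(elementId, new StringBuilder(text));
    }

    /** Find the block containing the cursor position without traversing the whole tree. */
    public Map.Entry<Integer, StringBuilder> locate(int cursorPosition) {
        // Greatest key <= cursorPosition, i.e. the block whose first character starts at or before it.
        return blocks.floorEntry(cursorPosition);
    }

    /** Insert text at the cursor; only the affected block is tagged for re-encryption. */
    public void insert(int cursorPosition, String text) {
        Map.Entry<Integer, StringBuilder> e = locate(cursorPosition);
        int offset = cursorPosition - e.getKey();          // offset = position - node ID
        e.getValue().insert(offset, text);
        dirty.add(e.getKey());
        shiftKeysAfter(e.getKey(), text.length());         // later blocks move forward
    }

    /** Re-key every block after 'fromKey' by 'delta' (positive for insert, negative for remove). */
    private void shiftKeysAfter(int fromKey, int delta) {
        // Two passes (remove all, then re-insert) avoid key collisions while shifting.
        TreeMap<Integer, StringBuilder> tail = new TreeMap<>(blocks.tailMap(fromKey, false));
        blocks.keySet().removeAll(tail.keySet());
        for (Map.Entry<Integer, StringBuilder> e : tail.entrySet()) {
            blocks.put(e.getKey() + delta, e.getValue());
        }
        // Note: dirty keys after fromKey would also need re-keying in a full implementation,
        // as would block splitting when a block overflows.
    }

    public Set<Integer> dirtyBlocks() { return dirty; }
}
```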
IV. EXPERIMENTAL ENVIRONMENT AND DATA ANALYSIS

The experiments use common encryption algorithms, namely 3DES and AES. The key length of Triple DES (3DES) can be 56, 112 or 168 bits, while the key length of AES can be 128, 192 or 256 bits. In general, a longer key gives higher security but also longer encryption time. The experiments use the longest (most secure) key length for each algorithm, and all encryption uses CBC mode with PKCS5Padding. All tests run in a virtual machine (VM) with a 2.33 GHz Core 2 CPU, 2 GB of memory and 32-bit Windows XP; the programs are written in Java (JDK 1.6). Four document sizes are tested: 1 MB, 4 MB, 8 MB and 12 MB, with the document content generated using the padding method described in the Java documentation.

<table>
<thead>
<tr><th>Encryption algorithm</th><th>1 MB</th><th>4 MB</th><th>8 MB</th><th>12 MB</th></tr>
</thead>
<tbody>
<tr><td>3DES Encrypt (ms)</td><td>187.4</td><td>618.6</td><td>1131.4</td><td>1721.8</td></tr>
<tr><td>3DES Decrypt (ms)</td><td>146.8</td><td>581.2</td><td>1100</td><td>1709.2</td></tr>
<tr><td>AES Encrypt (ms)</td><td>112.8</td><td>327.8</td><td>628</td><td>915.8</td></tr>
<tr><td>AES Decrypt (ms)</td><td>97.2</td><td>325</td><td>643.6</td><td>952.8</td></tr>
</tbody>
</table>

Table 2 reports the average time to encrypt and decrypt document archives of different sizes with each algorithm. In a multi-user collaboration, the user's modified content must be uploaded to the cloud collaborative service at regular intervals, and the other collaborators must then be notified to update their documents. Google Docs, for example, uses an AJAX trigger that uploads the changed content every 30 seconds [24]. If the whole document is encrypted, then even when only a small part of the content is modified the entire document must be re-encrypted, and every collaborator must still download the whole ciphertext and decrypt it to obtain the latest version. For frequently modified documents this produces a large amount of encryption/decryption work, and the time users spend on it remains considerable.

To evaluate the proposed approach, the rbTree-Doc experiments change the content of 1, 5, 25 and 125 nodes. In each run, a character is inserted or deleted at a random position in the document, repeatedly, until the configured number of changed nodes is reached. Figure 3 shows the encryption time when the block length is set to 1000 bytes. Compared with encrypting the whole document (Table 2), the proposed scheme is considerably more efficient and the amount of data that must be transmitted is not large, so collaborators can update the document contents more efficiently.
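To reproduce a measurement of this kind, a minimal benchmarking sketch (not the authors' code) might look as follows. It times AES and Triple DES, whose JCA name is "DESede", in CBC mode with PKCS5 padding on buffers of the sizes used in Table 2; absolute numbers will of course differ from the paper's, since they depend on the hardware and JDK version.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.security.SecureRandom;

/** Illustrative timing of whole-buffer encryption, mirroring the Table 2 setup. */
public class CipherTiming {
    public static void main(String[] args) throws Exception {
        int[] sizesMb = {1, 4, 8, 12};
        // {JCA key algorithm, transformation, key length in bits}
        String[][] algs = {{"AES", "AES/CBC/PKCS5Padding", "256"},
                           {"DESede", "DESede/CBC/PKCS5Padding", "168"}};
        SecureRandom rng = new SecureRandom();

        for (String[] alg : algs) {
            KeyGenerator kg = KeyGenerator.getInstance(alg[0]);
            kg.init(Integer.parseInt(alg[2]));
            SecretKey key = kg.generateKey();
            byte[] iv = new byte[alg[0].equals("AES") ? 16 : 8];  // AES block: 16 bytes; DESede: 8
            rng.nextBytes(iv);

            for (int mb : sizesMb) {
                byte[] data = new byte[mb * 1024 * 1024];
                rng.nextBytes(data);
                Cipher c = Cipher.getInstance(alg[1]);
                c.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
                long t0 = System.nanoTime();
                c.doFinal(data);
                long ms = (System.nanoTime() - t0) / 1000000L;
                System.out.println(alg[0] + " encrypt " + mb + " MB: " + ms + " ms");
            }
        }
    }
}
```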
V. CONCLUSION

Cloud-based collaborative services are an emerging class of applications. In the past, protecting data privacy by encrypting and decrypting the whole document was rather time-consuming in multi-user collaboration. This paper analyzes text editing in collaborative services and introduces the rbTree-Doc structure to reduce the amount of data that must be encrypted. Although rbTree-Doc adds the cost of building and maintaining the tree, the experimental results show that it supports the usual editing operations while reducing the cost of insert/remove operations together with encryption and decryption: efficiency improves by 31.04% with the 3DES encryption algorithm and by 23.94% with AES. In future work we hope to reduce the time spent analyzing, decomposing and restoring the document format, and to reduce the space wasted by the rbTree-Doc insert/remove mechanism; merging text blocks would reduce the number of nodes and improve the efficiency of operations on the red-black tree.

ACKNOWLEDGEMENT

This study was supported in part by the Education Department of Jiangsu Province industrialization project JHB2012-53 and the Science & Technology Support (Industrial) Project HAG2012059 in Huainan city.

REFERENCES
CSE120 Principles of Operating Systems
Prof Yuanyuan (YY) Zhou
Synchronization: Semaphore

Synchronization Needs
- Two synchronization needs
  - **Mutual exclusion** - whenever multiple threads access shared data, you need to worry about "protection" for mutual exclusion
  - **Coordination** - one thread waits for another to finish something (e.g., produce the data, free the buffer, etc.)

```c
typedef struct __lock_t {
    int flag;
    int guard;
    queue_t *q;
} lock_t;

void lock_init(lock_t *m) {
    m->flag = 0;
    m->guard = 0;
    queue_init(m->q);
}

void lock(lock_t *m) {
    while (TestAndSet(&m->guard, 1) == 1)
        ;                           // acquire guard lock by spinning
    if (m->flag == 0) {
        m->flag = 1;                // lock is acquired
        m->guard = 0;
    } else {
        queue_add(m->q, gettid());  // wait: enqueue ourselves, release guard, sleep
        m->guard = 0;
        park();
    }
}

void unlock(lock_t *m) {
    while (TestAndSet(&m->guard, 1) == 1)
        ;                           // acquire guard lock by spinning
    if (queue_empty(m->q))
        m->flag = 0;                // let go of lock; no one wants it
    else
        unpark(queue_remove(m->q)); // hold lock (pass it to the next thread!)
    m->guard = 0;
}
```

Higher-Level Synchronization
- We looked at using locks to provide mutual exclusion
- Those locks work, but they have some drawbacks when critical sections are long:
  - Spinlocks – inefficient
- Instead, we want synchronization mechanisms that:
  - Block waiters
  - Leave interrupts enabled inside the critical section
- Look at two common high-level mechanisms:
  - Semaphores: binary and counting
  - Monitors and condition variables
- Use them to solve common synchronization problems

Semaphores
- Semaphores are an **abstract data type** that provide mutual exclusion to critical sections
- Semaphores can also be used as atomic counters
  - More later
- Semaphores are **integers** that support two operations:
  - **P(semaphore)**: decrement; block the calling thread until the semaphore is open, i.e., the integer is greater than 0
    - Also called **Wait()** or down()
  - **V(semaphore)**: increment; allow another thread to enter. If a thread is blocked, wake it up
    - Also called **Signal()** or up()
- That's it! No other operations – not even just reading its value – exist
- Semaphore safety property: the semaphore value is always greater than or equal to 0
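As an aside, one possible way to realize the P()/V() semantics just listed is with Java's built-in monitor; this sketch is an illustration, not part of the lecture slides, and a real implementation would live inside the OS.

```java
/** Illustrative counting semaphore built on Java's monitor; P() blocks while the count is 0. */
public class CountingSemaphore {
    private int value;

    public CountingSemaphore(int initial) {
        if (initial < 0) throw new IllegalArgumentException("value must be >= 0");
        value = initial;                    // safety property: value >= 0 at all times
    }

    public synchronized void P() throws InterruptedException {
        while (value == 0)                  // closed: wait on the semaphore's queue
            wait();
        value--;                            // open: decrement and continue
    }

    public synchronized void V() {
        value++;                            // "history": the signal is remembered even if nobody waits
        notify();                           // wake one waiting thread, if any
    }
}
```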
Blocking in Semaphores
- Associated with each semaphore is a queue of waiting processes
- When \( P() \) is called by a thread:
  - If the semaphore is open (\( >0 \)), the thread continues
  - If the semaphore is closed (\( ==0 \)), the thread blocks on the queue
- Then \( V() \) opens the semaphore:
  - If a thread is waiting on the queue, that thread is unblocked
  - If no threads are waiting on the queue, the signal is remembered for the next thread by incrementing the counter
- In other words, \( V() \) has "history" (c.f., condition variables later)
  - This "history" is a counter

Semaphore Functionality (NOT implementation)

```c
P(Semaphore s) {
    if (s == 0)
        block in a queue;          /* wait until s > 0 */
    s = s - 1;
}

V(Semaphore s) {
    s = s + 1;
    if (someone is waiting in the queue)
        wake up one thread from the queue;
}

Init(Semaphore s, int v) {
    s = v;
}
```

Semaphore Types
- Semaphores come in two types
- **Binary** semaphore (value can only be 1 or 0; some refer to it as a mutex)
  - Represents single access to a resource
  - Guarantees mutual exclusion to a critical section
  - Similar to locks, with a subtle difference: a binary semaphore "remembers" a V(sem) issued while sem = 0, whereas unlock(l) when no thread holds the lock is undefined
- **Counting** semaphore
  - Represents a resource with many units available, or a resource that allows certain kinds of unsynchronized concurrent access (e.g., reading)
  - Multiple threads can pass the semaphore
  - The number of threads is determined by the semaphore "count"
- You can use one type to implement the other

[Figure: counting semaphore analogy, a two-lane road (B. Komazec, 2012).]

Semaphore Animation Video
- [https://www.youtube.com/watch?v=PQ5aK5wLCQE](https://www.youtube.com/watch?v=PQ5aK5wLCQE)
- In this video, wait() is P() and signal() is V()

Using Semaphores
- A mutex is similar to our locks, but the semantics are different

```c
struct Semaphore {
    int value;
    Queue q;
} S;

withdraw(account, amount) {
    P(S);
    balance = get_balance(account);
    balance = balance - amount;      // critical section
    put_balance(account, balance);
    V(S);
    return balance;
}
```

- Threads block on P(S) while another thread is in the critical section; it is undefined which blocked thread runs after a V(S)

Semaphore Exercise
- Using semaphores to allow robots to attend an exam:
  - Only 10 seats, but 100 robots
  - If a robot comes to the classroom and there is an available seat, it takes the seat; otherwise, it waits outside until another robot leaves the room
  - Every robot sits in the seat for only 30 min to finish the exam, and then leaves the room
- Implement the code (steps) for every robot to follow (one possible sketch in Java appears at the end of the semaphore part of this lecture)

Classic Synchronization Problems
- We've looked at a simple example of using synchronization
  - Mutual exclusion while accessing a bank account
- Now we're going to use semaphores to look at more interesting examples
  - Readers/Writers
  - Bounded Buffers
  - Santa Claus problem (YouTube video)

Readers/Writers Problem
- Readers/Writers Problem:
  - An object is shared among several threads
  - Some threads only read the object, others only write it
  - We can allow multiple readers but only one writer
  - How can we use semaphores to control access to the object to implement this protocol?
- Use three variables
  - int readcount – number of threads reading the object
  - Semaphore mutex – controls access to readcount
  - Semaphore w_or_r – exclusive writing or reading

A first attempt:

```c
Semaphore w_or_r = 1;

Reader {
    P(w_or_r);   // lock out writers
    read;
    V(w_or_r);   // up for grabs
}

Writer {
    P(w_or_r);   // lock out readers
    write;
    V(w_or_r);   // up for grabs
}
```

Does it work? Why?
A second attempt, counting readers:

```c
Semaphore w_or_r = 1;
int readcount = 0;      // record # readers

Reader {
    readcount++;
    if (readcount == 1) {
        P(w_or_r);      // lock out writers
    }
    read;
    readcount--;
    if (readcount == 0) {
        V(w_or_r);      // up for grabs
    }
}

Writer {
    P(w_or_r);          // lock out readers
    write;
    V(w_or_r);          // up for grabs
}
```

Readers/Writers Real Solution
- Use three variables
  - int readcount – number of threads reading the object
  - Semaphore mutex – guards access to readcount
  - Semaphore w_or_r – exclusive writing or reading

```c
readcount = 0;          // number of readers
Semaphore mutex = 1;    // mutual exclusion to readcount
Semaphore w_or_r = 1;   // exclusive writer or reader

writer {
    P(w_or_r);          // lock out readers
    write;
    V(w_or_r);          // up for grabs
}

reader {
    P(mutex);           // lock readcount
    readcount++;        // one more reader
    if (readcount == 1)
        P(w_or_r);      // synch w/ writers
    V(mutex);           // unlock readcount
    read;
    P(mutex);           // lock readcount
    readcount--;        // one less reader
    if (readcount == 0)
        V(w_or_r);      // up for grabs
    V(mutex);           // unlock readcount
}
```

- I will give you 2-3 minutes to discuss it with someone next to you
- w_or_r provides mutual exclusion between readers and writers, and also between multiple writers
- Why do readers use mutex (a binary semaphore)?
- What if V(mutex) were placed above "if (readcount == 1)"?
- Why do we need "if (readcount == 1)"?
- Why do we need "if (readcount == 0)"?

But it still has a problem
- Problem: Starvation
  - If a writer is waiting but readers keep coming, the writer is starved

Bounded Buffer
- **Problem:** There is a set of resource buffers shared by producer and consumer threads
  - **Producer** inserts resources into the buffer set
    - Output, disk blocks, memory pages, processes, etc.
  - **Consumer** removes resources from the buffer set
    - Whatever is generated by the producer
  - **Producer and consumer execute at different rates**
    - No serialization of one behind the other
    - Tasks are independent (easier to think about)
    - The buffer set allows each to run without explicit handoff

Bounded Buffer (2)
- Use three semaphores:
  - **empty** – count of empty buffers
    - Counting semaphore
    - empty = N – (np – nc)
  - **full** – count of full buffers
    - Counting semaphore
    - full = np – nc
  - **mutex** – mutual exclusion to the shared set of buffers
    - Binary semaphore

Bounded Buffer (3)

```c
Semaphore mutex = 1;   // mutual exclusion to shared set of buffers
Semaphore empty = N;   // count of empty buffers (all empty to start)
Semaphore full  = 0;   // count of full buffers (none full to start)

producer {
    while (1) {
        Produce new resource;
        P(empty);   // wait for empty buffer
        P(mutex);   // lock buffer list
        Add resource to an empty buffer;
        V(mutex);   // unlock buffer list
        V(full);    // note a full buffer
    }
}

consumer {
    while (1) {
        P(full);    // wait for a full buffer
        P(mutex);   // lock buffer list
        Remove resource from a full buffer;
        V(mutex);   // unlock buffer list
        V(empty);   // note an empty buffer
        Consume resource;
    }
}
```

Bounded Buffer (4)
- The consumer decrements full and blocks when the buffer has no items, because the semaphore full is at 0
- The producer decrements empty and blocks when the buffer is full, because the semaphore empty is at 0

Bounded Buffer (5)
- Why do we need the mutex at all?
- Where are the critical sections?
- What happens if operations on mutex and full/empty are switched around?
- The pattern of P/V on full/empty is a common construct, often called an interlock
- Why V(full) and V(empty)?
- Producer-Consumer and Bounded Buffer are classic examples of synchronization problems

YouTube Video for Bounded Buffer
- https://www.youtube.com/watch?v=GvfjiA9jkTs

Possible Deadlocks with Semaphores
- Example: P0 and P1 share two mutex semaphores S and Q (S := 1; Q := 1)

```c
/* P0 */        /* P1 */
P(S);           P(Q);
P(Q);           P(S);
...             ...
V(S);           V(Q);
V(Q);           V(S);
```

Be Careful When Using Semaphores
- Violation of mutual exclusion: `V(mutex); critical section; P(mutex);`
- Deadlock: `P(mutex); critical section; P(mutex);`
- Violation of mutual exclusion: `critical section; V(mutex);` (the initial P(mutex) is missing)

Semaphore Summary
- Semaphores can be used to solve any of the traditional synchronization problems
- However, they have some drawbacks
  - They are essentially shared global variables
    - Can potentially be accessed anywhere in the program
  - No connection between the semaphore and the data being controlled by the semaphore
  - Used both for critical sections (mutual exclusion) and coordination (scheduling)
    - Note that I had to use comments in the code to distinguish
  - No control or guarantee of proper usage
- Sometimes hard to use and prone to bugs
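Before moving on to monitors, here is one possible Java sketch of the robot-exam exercise from earlier in this lecture; it is not part of the original slides. java.util.concurrent.Semaphore's acquire()/release() play the roles of P()/V(), and the 10-permit counting semaphore models the 10 seats.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

/** Illustrative sketch: 100 robots share 10 exam seats via a counting semaphore. */
public class ExamRoom {
    private static final Semaphore seats = new Semaphore(10);   // 10 seats => count of 10

    static class Robot extends Thread {
        private final int id;
        Robot(int id) { this.id = id; }

        @Override
        public void run() {
            try {
                seats.acquire();                        // P(): wait outside until a seat is free
                try {
                    System.out.println("Robot " + id + " took a seat");
                    TimeUnit.MILLISECONDS.sleep(30);    // stands in for the 30-minute exam
                } finally {
                    System.out.println("Robot " + id + " left the room");
                    seats.release();                    // V(): free the seat for a waiting robot
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) new Robot(i).start();
    }
}
```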
Monitors
- A monitor is a programming language construct that controls access to shared data
  - Synchronization code is added by the compiler and enforced at runtime
  - Why is this an advantage?
- A monitor is a module that encapsulates
  - Shared data structures
  - Procedures that operate on the shared data structures
  - Synchronization between concurrent threads that invoke those procedures
- A monitor protects its data from unsynchronized access
- It guarantees that threads accessing its data through its procedures interact only in legitimate ways

Monitor Semantics
- A monitor guarantees mutual exclusion
  - Only one thread can execute any monitor procedure at any time (the thread is "in the monitor")
  - If a second thread invokes a monitor procedure when a first thread is already executing one, it blocks
    - So the monitor has to have a wait queue...
  - If a thread within a monitor blocks, another one can enter
- What are the implications in terms of parallelism in a monitor?

Account Example

```c
Monitor account {
    double balance;

    double withdraw(amount) {
        balance = balance - amount;
        return balance;
    }
}
```

- Hey, that was easy
- Threads block waiting to get into the monitor; when the first thread exits, another can enter, and which one enters is undefined
- But what if a thread wants to wait inside the monitor?

A condition variable is associated with a condition needed for a thread to make progress

```c
Monitor M {
    ... monitored variables
    Condition c;

    void enter_mon(...) {
        if (extra property not true)
            wait(c);        // waits outside of the monitor's mutex
        do what you have to do
        if (extra property true)
            signal(c);      // brings in one thread waiting on the condition
    }
}
```

Condition Variables
- Condition variables support three operations:
  - **Wait** – release the monitor lock, wait for the C/V to be signaled
    - So condition variables have wait queues, too
  - **Signal** – wake up one waiting thread
  - **Broadcast** – wake up all waiting threads
- Condition variables *are not* boolean objects
  - "if (condition_variable) then ..." does not make sense
  - "if (num_resources == 0) then wait(resources_available)" does
  - An example will make this more clear

Condition Variables
- Condition variables are NOT conditions
- So don't EVER do:
  - if (conditionVariable) { ... }
- Instead, a condition variable is a way for one thread to wait (if some resource is not available) and for some other thread to wake it up once the resource becomes available
  - Sleep
  - Wake (also called "signal")
  - Wakeall (or "signalAll")

Condition Variable & Lock
- A condition variable doesn't replace the lock; it complements the lock
- **Sleep**(condition, lock) or **Wait**(condition, lock)
  - First releases the lock and puts the thread on the condition's queue; when woken, it re-acquires the lock
  - Once sleep returns, the thread has been woken by some other thread, and it again holds the lock
- **Wake**(condition) or **Signal**(condition)
  - Wakes up one thread waiting on the condition (queue)
  - Some systems use a different name, such as "Signal(condition)" or "Notify(condition)"
- **Wakeall**(condition) or **Broadcast**(condition)
  - Wakes all threads waiting on the condition (queue)
  - Some systems use a different name, such as "SignalAll(condition)" or "NotifyAll(condition)"

```c
Monitor bounded_buffer {
    Resource buffer[N];
    // Variables for indexing buffer
    // monitor invariant involves these vars
    Condition not_full;    // space in buffer
    Condition not_empty;   // value in buffer

    void put_resource(Resource R) {
        if (buffer array is full)
            wait(not_full);
        Add R to buffer array;
        signal(not_empty);
    }

    Resource get_resource() {
        if (buffer array is empty)
            wait(not_empty);
        Get resource R from buffer array;
        signal(not_full);
        return R;
    }
} // end monitor
```

- What happens if no threads are waiting when signal is called?
  - The signal is lost

Condition Vars != Semaphores
- Monitors with condition variables != semaphores
  - But they can implement each other
- Access to the monitor is controlled by a lock
  - `wait()` blocks the calling thread and gives up the lock
    - To call `wait`, the thread has to be in the monitor (hence holds the lock)
    - `Semaphore::P` just blocks the thread on the queue
  - `signal()` causes a waiting thread to wake up
    - If there is no waiting thread, the signal is lost
    - `Semaphore::V()` increases the semaphore count, allowing future entry even if no thread is waiting
    - Condition variables have no history

Signal Semantics
- There are two flavors of monitors that differ in the scheduling semantics of signal():
  - **Hoare** monitors (original)
    - signal() immediately switches from the caller to a waiting thread
    - The condition that the waiter was anticipating is guaranteed to hold when the waiter executes
    - The signaler must restore the monitor invariants before signaling
  - **Mesa** monitors (Mesa, Java)
    - signal() places a waiter on the ready queue, but the signaler continues inside the monitor
    - The condition is not necessarily true when the waiter runs again
    - Returning from wait() is only a hint that something changed
    - Must recheck the condition

Hoare vs. Mesa Monitors
- **Hoare**
  ```
  if (empty)
      wait(condition);
  ```
- **Mesa**
  ```
  while (empty)
      wait(condition);
  ```
- **Tradeoffs**
  - Mesa monitors are easier to use and more efficient
    - Fewer context switches, easy to support broadcast
  - Hoare monitors leave less to chance
    - Easier to reason about the program
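Since Java monitors follow Mesa semantics, the bounded buffer above can be sketched with synchronized methods and wait()/notifyAll(). This example is not from the slides; the while-loop recheck is exactly the Mesa-style pattern just described.

```java
/** Illustrative Mesa-style monitor: a bounded buffer with synchronized/wait/notifyAll. */
public class BoundedBuffer<T> {
    private final Object[] items;
    private int count = 0, in = 0, out = 0;

    public BoundedBuffer(int capacity) {
        items = new Object[capacity];
    }

    public synchronized void put(T item) throws InterruptedException {
        while (count == items.length)   // Mesa semantics: recheck the condition after every wakeup
            wait();                     // releases the monitor lock while waiting
        items[in] = item;
        in = (in + 1) % items.length;
        count++;
        notifyAll();                    // wake waiters; they will recheck their condition
    }

    @SuppressWarnings("unchecked")
    public synchronized T get() throws InterruptedException {
        while (count == 0)              // recheck: "empty" may be true again by the time we run
            wait();
        T item = (T) items[out];
        items[out] = null;
        out = (out + 1) % items.length;
        count--;
        notifyAll();
        return item;
    }
}
```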
Monitor Readers and Writers
- Writing with just wait() will be safe, but maybe not "live" - why?
  - Starvation

```c
Monitor RW {
    int nr = 0, nw = 0;
    Condition canRead, canWrite;

    void StartRead() {
        while (nw != 0) do wait(canRead);
        nr++;
    }

    void EndRead() {
        nr--;
        if (nr == 0) signal(canWrite);
    }

    void StartWrite() {
        while (nr != 0 || nw != 0) do wait(canWrite);
        nw++;
    }

    void EndWrite() {
        nw--;
        signal(canWrite);
        signal(canRead);
    }
} // end monitor
```

Monitor Readers and Writers
- Is there any priority between readers and writers?
- What if you wanted to ensure that a waiting writer would have priority over new readers?

Summary
- **Semaphores**
  - P()/V() implement blocking mutual exclusion
  - Also used as atomic counters (counting semaphores)
  - Can be inconvenient to use
- **Monitors**
  - Synchronize execution within procedures that manipulate encapsulated data shared among procedures
  - Only one thread can execute within a monitor at a time
  - Rely upon high-level language support
- **Condition variables**
  - Used by threads as a synchronization point to wait for events
  - Inside monitors, or outside with locks

Dining Philosophers: an intellectual game
- Philosophers eat/think
- Eating needs 2 forks
- Pick up one fork at a time
- Possible deadlock? How to prevent deadlock?

Does it solve the Dining Philosophers Problem?

```c
#define N 5                      /* number of philosophers */

void philosopher(int i)          /* i: philosopher number, from 0 to 4 */
{
    while (TRUE) {
        think();                 /* philosopher is thinking */
        take_fork(i);            /* take left fork */
        take_fork((i+1) % N);    /* take right fork; % is modulo operator */
        eat();                   /* yum-yum, spaghetti */
        put_fork(i);             /* put left fork back on the table */
        put_fork((i+1) % N);     /* put right fork back on the table */
    }
}
```

Dining Philosophers Solution

```c
#define N        5               /* number of philosophers */
#define LEFT     (i+N-1)%N       /* number of i's left neighbor */
#define RIGHT    (i+1)%N         /* number of i's right neighbor */
#define THINKING 0               /* philosopher is thinking */
#define HUNGRY   1               /* philosopher is trying to get forks */
#define EATING   2               /* philosopher is eating */

typedef int semaphore;
int state[N];                    /* state of each philosopher */
semaphore mutex = 1;             /* mutual exclusion for critical regions */
semaphore s[N];                  /* one semaphore per philosopher */

void philosopher(int i)          /* i: philosopher number, from 0 to N-1 */
{
    while (TRUE) {               /* repeat forever */
        think();                 /* philosopher is thinking */
        take_forks(i);           /* acquire two forks or block */
        eat();                   /* yum-yum, spaghetti */
        put_forks(i);            /* put both forks back on table */
    }
}

void take_forks(int i)           /* i: philosopher number, from 0 to N-1 */
{
    down(&mutex);                /* enter critical region */
    state[i] = HUNGRY;           /* record fact that philosopher i is hungry */
    test(i);                     /* try to acquire 2 forks */
    up(&mutex);                  /* exit critical region */
    down(&s[i]);                 /* block if forks were not acquired */
}

void put_forks(int i)            /* i: philosopher number, from 0 to N-1 */
{
    down(&mutex);                /* enter critical region */
    state[i] = THINKING;         /* philosopher has finished eating */
    test(LEFT);                  /* see if left neighbor can now eat */
    test(RIGHT);                 /* see if right neighbor can now eat */
    up(&mutex);                  /* exit critical region */
}

void test(int i)                 /* i: philosopher number, from 0 to N-1 */
{
    if (state[i] == HUNGRY && state[LEFT] != EATING && state[RIGHT] != EATING) {
        state[i] = EATING;
        up(&s[i]);
    }
}
```

The Sleeping Barber Problem
- N customer chairs
- One barber can cut one customer's hair at any time
- If there are no customers, the barber goes to sleep

The Sleeping Barber Solution (1)

```c
#define CHAIRS 5                 /* # chairs for waiting customers */

typedef int semaphore;           /* use your imagination */
semaphore customers = 0;         /* # of customers waiting for service */
semaphore barbers = 0;           /* # of barbers waiting for customers */
semaphore mutex = 1;             /* for mutual exclusion */
int waiting = 0;                 /* customers are waiting (not being cut) */
```
The Sleeping Barber Solution (2)

```c
void barber(void)
{
    while (TRUE) {
        down(&customers);        /* go to sleep if # of customers is 0 */
        down(&mutex);            /* acquire access to 'waiting' */
        waiting = waiting - 1;   /* decrement count of waiting customers */
        up(&barbers);            /* one barber is now ready to cut hair */
        up(&mutex);              /* release 'waiting' */
        cut_hair();              /* cut hair (outside critical region) */
    }
}
```

The Sleeping Barber Solution (3)

```c
void customer(void)
{
    down(&mutex);                /* enter critical region */
    if (waiting < CHAIRS) {      /* if there are no free chairs, leave */
        waiting = waiting + 1;   /* increment count of waiting customers */
        up(&customers);          /* wake up barber if necessary */
        up(&mutex);              /* release access to 'waiting' */
        down(&barbers);          /* go to sleep if # of free barbers is 0 */
        get_haircut();           /* be seated and be serviced */
    } else {
        up(&mutex);              /* shop is full; do not wait */
    }
}
```

Solution to the sleeping barber problem.

Santa Claus Problem
- http://www.youtube.com/watch?v=pqO6tKN2lc4
LIBRARY BINDING
© Giovanni De Micheli, Stanford University

Outline
- Modeling and problem analysis.
- Rule-based systems for library binding.
- Algorithms for library binding:
  - Structural covering/matching.
  - Boolean covering/matching.
- Concurrent optimization and binding.

Library binding
- Given an unbound logic network and a set of library cells:
  - Transform into an interconnection of instances of library cells.
  - Optimize area (under delay constraints).
  - Optimize delay (under area constraints).
  - Optimize power (under delay constraints).
- Also called technology mapping:
  - Method used for re-designing circuits in different technologies.

Library models
- Combinational elements:
  - Single-output functions:
    * e.g. AND, OR, AOI.
  - Compound cells: e.g. adders, encoders.
- Sequential elements:
  - Registers, counters.
- Miscellaneous:
  - Schmitt triggers.

Major approaches
- Rule-based systems:
  - Mimic designer activity.
  - Handle all types of cells.
- Heuristic algorithms:
  - Restricted to single-output combinational cells.
- Most tools use a combination of both.

Rule-based library binding
- Binding by stepwise transformations.
- Data-base:
  - Set of patterns associated with the best implementation.
- Rules:
  - Select the subnetwork to be mapped.
  - Handle high-fanout problems, buffering, etc.

Example
[Figure: a sequence of network diagrams transformed step by step into library cells.]

Strategies
- Search for a sequence of transformations.
- Search space:
  - *Breadth* (options at each step).
  - *Depth* (look-ahead).
- *Meta-rules* determine breadth and depth dynamically.

Rule-based library binding
- Advantages:
  - Applicable to all kinds of libraries.
- Disadvantages:
  - Large rule data-base:
    * Completeness issue.
    * Formal properties of the bound network.
  - Data-base updates.

Algorithms for library binding
- Mainly for single-output combinational cells.
- Fast and efficient:
  - Quality comparable to rule-based systems.
- Library description/update is simple:
  - Each cell is modeled by its function or an equivalent pattern.

Problem analysis
- Matching:
  - A cell matches a sub-network if their terminal behavior is the same.
  - Input-variable assignment problem.
- Covering:
  - A cover of an unbound network is a partition into subnetworks which can be replaced by library cells.

Assumptions
- Network granularity is fine.
  - Decomposition into base functions.
    * 2-input $AND$, $OR$, $NAND$, $NOR$.
- Trivial binding:
  - Replacement of each vertex by a base cell.

Example
- $z = a + w$; $w = x + y$; $y = d \cdot u$; $x = b + c$; $u = e \cdot f$
- [Figure: the corresponding network and its decomposition into base cells.]

Example

<table>
<thead>
<tr><th>Library cell</th><th>Cost</th></tr>
</thead>
<tbody>
<tr><td>AND2</td><td>4</td></tr>
<tr><td>OR2</td><td>4</td></tr>
<tr><td>OA21</td><td>5</td></tr>
</tbody>
</table>

- Network: $x = b + c$; $y = a \cdot x$; $z = x \cdot d$
- Matches:
  - m1: {v1, OR2}
  - m2: {v2, AND2}
  - m3: {v3, AND2}
  - m4: {v1, v2, OA21}
  - m5: {v1, v3, OA21}

Example
- Vertex covering:
  - Covering $v_1$: $(m_1 + m_4 + m_5)$.
  - Covering $v_2$: $(m_2 + m_4)$.
  - Covering $v_3$: $(m_3 + m_5)$.
- Input compatibility:
  - Match $m_2$ requires $m_1$:
    * $(m'_2 + m_1)$.
  - Match $m_3$ requires $m_1$:
    * $(m'_3 + m_1)$.
- Overall *binate* clause:
  - $(m_1 + m_4 + m_5)(m_2 + m_4)(m_3 + m_5)(m'_2 + m_1)(m'_3 + m_1) = 1$

Heuristic algorithms
- Decomposition:
  - Cast the network and the library in a standard form.
  - Decompose into base functions.
  - Example: NAND2 and INV.
- Partitioning:
  - Break the network into cones.
  - Reduce to many multi-input single-output subnetworks.
- Covering:
  - Cover each subnetwork by library cells.

[Figure: the three phases (decomposition, partitioning, covering) applied to a small network.]

Heuristic algorithms
- Structural approach:
  - Model functions by *patterns* (e.g. trees, dags).
  - Rely on *pattern matching* techniques.
- Boolean approach:
  - Use Boolean models.
  - Solve *tautology* problem.
  - More powerful.

Example: Boolean versus structural matching
- $f = xy + x'y' + y'z$
- $g = xy + x'y' + xz$
- Function equality is a tautology:
  - Boolean match.
- Patterns may be different:
  - Structural match may not be found.

Example: Boolean versus structural matching
\[ f = xy + x'y' + y'z \]
\[ g = xy + x'y' + xz \]
*Patterns do not match.*

Structural matching and covering
- Expression patterns:
  - Represented by dags.
- Identify pattern dags in network:
  - Sub-graph isomorphism.
- Simplification:
  - Use tree patterns.

Example: Tree-based matching
- Network:
  - Partitioned and decomposed:
    - NOR2 (or NAND2) + INV.
    - Generic base functions.
  - Subject tree.
- Library:
  - Represented by trees.
  - Possibly more than one tree per cell.
- Pattern recognition:
  - Simple binary tree match.
  - Aho-Corasick automaton.

Simple library
- INV, NAND2, AND2, NOR2, OR2, AOI21, AOI22.
[Figure: the pattern trees, in terms of NAND2 and INV, for each library cell; cells such as AND2, OR2, AOI21 and AOI22 have more than one pattern tree.]

Tree covering
- Dynamic programming:
  - Visit subject tree bottom-up.
- At each vertex:
  - Attempt to match:
    - Locally rooted subtree.
    - All library cells.
  - Optimum solution for the subtree.

Example

**SUBJECT TREE**
- r
- s
- t
- u

**PATTERN TREES**
- t1 (cost = 2, INV)
- t2 (cost = 3, NAND)
- t3 (cost = 4, AND)
- t4 (cost = 5, OR)

Example
- Match of s: t1, cost = 2.
- Match of u: t2, cost = 3.
- Match of t: t1, cost = 2 + 3 = 5.
- Match of r: t2, cost = 3 + 2 + 4 = 9.
- Match of r: t4, cost = 5 + 3 = 8.

Example
- Minimum-area cover.
- Area costs:
  - INV: 2; NAND2: 3; AND2: 4; AOI21: 6.
- Best choice:
  - AOI21 fed by a NAND2 gate.

### Example

<table>
<thead>
<tr><th>Network</th><th>Subject graph</th><th>Vertex</th><th>Match</th><th>Gate</th><th>Cost</th></tr>
</thead>
<tbody>
<tr><td></td><td></td><td>x</td><td>t2</td><td>NAND2(b,c)</td><td>3</td></tr>
<tr><td></td><td></td><td>y</td><td>t1</td><td>INV(a)</td><td>2</td></tr>
<tr><td></td><td></td><td>z</td><td>t2</td><td>NAND2(x,d)</td><td>2·3 = 6</td></tr>
<tr><td></td><td></td><td>w</td><td>t2</td><td>NAND2(y,z)</td><td>3·3 + 2 = 11</td></tr>
<tr><td></td><td></td><td>o</td><td>t1</td><td>INV(w)</td><td>3·3 + 2·2 = 13</td></tr>
<tr><td></td><td></td><td>o</td><td>t3</td><td>AND2(y,z)</td><td>2·3 + 4 + 2 = 12</td></tr>
<tr><td></td><td></td><td>o</td><td>t6B</td><td>AOI21(x,d,a)</td><td>3 + 6 = 9</td></tr>
</tbody>
</table>

Minimum delay cover
- Dynamic programming approach.
- Cost related to gate delay.
- Delay modeling:
  - Constant gate delay:
    - Straightforward.
  - Load-dependent delay:
    - Load fanout unknown.
    - Binning techniques.

Minimum delay cover: constant delays
- If the cell pattern tree is isomorphic to the locally rooted subtree, the vertex is labeled with the cell delay.
- If the cell tree is isomorphic to a subtree with leaves $L$, the vertex is labeled with the cell delay plus the maximum of the labels of $L$.
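The bottom-up dynamic programme used for tree covering can be made concrete with a small sketch. The Python fragment below is not part of the slides: the per-vertex match lists are transcribed by hand from the minimum-area example table rather than produced by a tree pattern matcher (which would normally enumerate them, e.g. with an Aho-Corasick automaton over the NAND2/INV subject tree). It reproduces the costs in the example and the choice of an AOI21 fed by a NAND2.

```python
# Minimal sketch (assumed, not from the slides): minimum-area tree covering
# by dynamic programming over hand-enumerated matches.

AREA = {"INV": 2, "NAND2": 3, "AND2": 4, "AOI21": 6}

# vertex -> candidate matches; each match names the library cell and the
# vertices/primary inputs that become the leaves of the matched subtree.
MATCHES = {
    "x": [("NAND2", ["b", "c"])],
    "y": [("INV", ["a"])],
    "z": [("NAND2", ["x", "d"])],
    "w": [("NAND2", ["y", "z"])],
    "o": [("INV", ["w"]), ("AND2", ["y", "z"]), ("AOI21", ["x", "d", "a"])],
}
PRIMARY_INPUTS = {"a", "b", "c", "d"}

best = {}  # vertex -> (minimum area of the subtree, chosen cell)

def cover(v):
    """Return the minimum area of any cover of the subtree rooted at v."""
    if v in PRIMARY_INPUTS:
        return 0
    if v in best:
        return best[v][0]
    cost, choice = min(
        (AREA[cell] + sum(cover(leaf) for leaf in leaves), cell)
        for cell, leaves in MATCHES[v]
    )
    best[v] = (cost, choice)
    return cost

print(cover("o"))   # 9
print(best)         # o is covered by an AOI21 fed by the NAND2 at x
```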
Example
- Inputs data-ready times are 0 except for $t_d = 6$.
- Constant delays:
  - INV: 2; NAND2: 4; AND2: 5; AOI21: 10.
- Compute *data-ready* times bottom-up:
  - $t_x = 4$, $t_y = 2$, $t_z = 10$, $t_w = 14$.
- Best choice:
  - An AND2, two NAND2 and an INV gate.

### Example

<table>
<thead>
<tr><th>Network</th><th>Subject graph</th><th>Vertex</th><th>Match</th><th>Gate</th><th>Cost</th></tr>
</thead>
<tbody>
<tr><td></td><td></td><td>x</td><td>t2</td><td>NAND2(b,c)</td><td>4</td></tr>
<tr><td></td><td></td><td>y</td><td>t1</td><td>INV(a)</td><td>2</td></tr>
<tr><td></td><td></td><td>z</td><td>t2</td><td>NAND2(x,d)</td><td>6 + 4 = 10</td></tr>
<tr><td></td><td></td><td>w</td><td>t2</td><td>NAND2(y,z)</td><td>10 + 4 = 14</td></tr>
<tr><td></td><td></td><td>o</td><td>t1</td><td>INV(w)</td><td>14 + 2 = 16</td></tr>
<tr><td></td><td></td><td>o</td><td>t3</td><td>AND2(y,z)</td><td>10 + 5 = 15</td></tr>
<tr><td></td><td></td><td>o</td><td>t6B</td><td>AOI21(x,d,a)</td><td>10 + 6 = 16</td></tr>
</tbody>
</table>

Minimum delay cover: load-dependent delays
- Model:
  - Assume a finite set of load values.
- Dynamic programming approach:
  - Compute an array of solutions for each possible load.
  - For each input to a matching cell the best match for any load is selected.
  - *Optimum* solution, when all possible loads are considered.

Example
- Inputs data-ready times are 0 except for $t_d = 6$.
- Load-dependent delays (for load $l$):
  - INV: $1 + l$; NAND2: $3 + l$; AND2: $4 + l$; AOI21: $9 + l$.
- Loads:
  - INV: 1; NAND2: 1; AND2: 1; AOI21: 1.
- Same solution as before.

Example
- Inputs data-ready times are 0 except for $t_d = 6$.
- Load-dependent delays (for load $l$):
  - INV: $1 + l$; NAND2: $3 + l$; AND2: $4 + l$; AOI21: $9 + l$; SINV: $1 + 0.5\,l$.
- Loads:
  - INV: 1; NAND2: 1; AND2: 1; AOI21: 1; SINV: 2.
- Assume output load is 1:
  - Same solution as before.
- Assume output load is 5:
  - Solution uses SINV cell.
**Example**

<table>
<thead>
<tr><th>Network</th><th>Subject graph</th><th>Vertex</th><th>Match</th><th>Gate</th><th>Cost</th></tr>
</thead>
<tbody>
<tr><td></td><td></td><td>x</td><td>t2</td><td>NAND2(b,c)</td><td>4</td></tr>
<tr><td></td><td></td><td>y</td><td>t1</td><td>INV(a)</td><td>2</td></tr>
<tr><td></td><td></td><td>z</td><td>t2</td><td>NAND2(x,d)</td><td>10</td></tr>
<tr><td></td><td></td><td>w</td><td>t2</td><td>NAND2(y,z)</td><td>14</td></tr>
<tr><td></td><td></td><td>o</td><td>t1</td><td>INV(w)</td><td>15</td></tr>
<tr><td></td><td></td><td>o</td><td>t3</td><td>AND2(y,z)</td><td>18.5</td></tr>
<tr><td></td><td></td><td>o</td><td>t6B</td><td>AOI21(x,d,a)</td><td></td></tr>
<tr><td></td><td></td><td>o</td><td></td><td>SINV(w)</td><td></td></tr>
</tbody>
</table>

<table>
<thead>
<tr><th>Cost</th><th>Load=1</th><th>Load=2</th><th>Load=5</th></tr>
</thead>
<tbody>
<tr><td>x</td><td>4</td><td>5</td><td>8</td></tr>
<tr><td>y</td><td>2</td><td>3</td><td>6</td></tr>
<tr><td>z</td><td>10</td><td>11</td><td>14</td></tr>
<tr><td>w</td><td>14</td><td>15</td><td>18</td></tr>
<tr><td>o</td><td></td><td></td><td>20</td></tr>
<tr><td>t3</td><td></td><td></td><td>19</td></tr>
<tr><td>t6B</td><td></td><td></td><td>20</td></tr>
<tr><td>SINV</td><td></td><td></td><td><strong>18.5</strong></td></tr>
</tbody>
</table>

Library binding and polarity assignment
- Search for a lower cost solution by not constraining the signal polarities.
- Most circuits allow us to choose the input/output signal polarities.
- Approaches:
  - Structural covering.
  - Boolean covering.

Structural covering and polarity assignment
- Pre-process subject network:
  - Add inverter pairs between NANDs.
  - Provide signals with both polarities.
- Add inverter-pair cell to the library:
  - To eliminate unneeded pairs.
  - Cell corresponds to a connection with zero cost.

Example

Boolean covering
- Decompose network into base functions.
- When considering vertex $v_i$:
  - Construct *clusters* by local elimination.
  - Several functions associated with $v_i$.
- Limit size and depth of clusters.

Example
\[ f_{j,1} = xy; \]
\[ f_{j,2} = x(a + c); \]
\[ f_{j,3} = (e + z)y; \]
\[ f_{j,4} = (e + z)(a + c); \]
\[ f_{j,5} = (e + c' + d)y; \]
\[ f_{j,6} = (e + c' + d)(a + c); \]

Boolean matching: $\mathcal{P}$-equivalence
- **Cluster function** $f(x)$: sub-network behavior.
- **Pattern function** $g(y)$: cell behavior.
- **$\mathcal{P}$-equivalence:**
  - Does there exist a permutation operator $\mathcal{P}$ such that $f(x) = g(\mathcal{P} x)$ is a tautology?
- **Approaches:**
  - Tautology check over all input permutations.
  - Multi-rooted pattern ROBDD capturing all permutations.

Input/output polarity assignment
- Allow for reassignment of input/output polarity.
- $\mathcal{NPN}$ classification of Boolean functions.
- $\mathcal{NPN}$-equivalence:
  - Does there exist a permutation matrix $\mathcal{P}$ and complementation operators $\mathcal{N}_i$, $\mathcal{N}_o$ such that $f(\mathbf{x}) = \mathcal{N}_o\, g(\mathcal{P} \mathcal{N}_i \mathbf{x})$ is a tautology?
- Variations:
  - $\mathcal{N}$-equivalence, $\mathcal{PN}$-equivalence.

Boolean matching
- *Pin assignment* problem.
- Map cluster variables $x$ to pattern variables $y$.
- Characteristic equation: $A(x, y) = 1$.
- Pattern function under variable assignment:
  - $g_A(x) = S_y\,\big(A(x, y)\, g(y)\big)$, where $S_y$ denotes smoothing (existential quantification) of the $y$ variables.
- *Tautology problem:*
  - $f(x)\,\overline{\oplus}\,g_A(x)$, i.e.
  - $\forall x\,\big(f(x)\,\overline{\oplus}\,S_y (A(x, y)\, g(y))\big) = 1$.

Example
- Assign $x_1$ to $y_2'$ and $x_2$ to $y_1$.
- Characteristic equation:
  - $A(x_1, x_2, y_1, y_2) = (x_1 \oplus y_2)(x_2 \overline{\oplus} y_1)$
- AND pattern function:
  - $g = y_1 y_2$
- Pattern function under assignment:
  - $S_{y_1,y_2}\,(A\, g) = S_{y_1,y_2}\,(x_1 \oplus y_2)(x_2 \overline{\oplus} y_1)\, y_1 y_2 = x_2 x_1'$

Signatures and filters
- Capture some properties of Boolean functions.
- If signatures do not match, there is no match.
- Used as filters to reduce computation.
- Signatures:
  - Unateness.
  - Symmetries.
  - Co-factor sizes.
  - Spectra.

Filters based on unateness and symmetries
- Any pin assignment must associate unate (binate) variables in $f(\mathbf{x})$ with unate (binate) variables in $g(\mathbf{y})$.
- Variables or groups of variables that are interchangeable in $f(\mathbf{x})$ must be interchangeable in $g(\mathbf{y})$.

Example
- Cluster function: $f = abc$.
  - Symmetries: $\{(a, b, c)\}$ – unate.
- Pattern functions:
  - $g_1 = a + b + c$
    - Symmetries: $\{(a, b, c)\}$ – unate.
  - $g_2 = ab + c$
    - Symmetries: $\{(a, b)(c)\}$ – unate.
  - $g_3 = abc' + a'b'c$
    - Symmetries: $\{(a, b, c)\}$ – binate.

Concurrent optimization and library binding
- Motivation:
  - Logic simplification is usually done prior to binding.
  - Logic simplification/substitution can be combined with binding.
- Mechanism:
  - Binding induces some don't care conditions.
  - Exploit don't cares as degrees of freedom in matching.

Boolean matching with *don't care* conditions
- Given $f(x)$, $f_{DC}(x)$ and $g(y)$:
  - $g$ matches $f$ if $g$ is equivalent to $\tilde{f}$, where $f \cdot f_{DC}' \leq \tilde{f} \leq f + f_{DC}$.
- Matching condition:
  - $\forall x\,\big(f_{DC}(x) + f(x)\,\overline{\oplus}\,S_y (A(x, y)\, g(y))\big)$

Example
- Assume $v_x$ is bound to $\mathrm{OR3}(c', b, e)$.
- *Don't care* set includes $x \oplus (c' + b + e)$.
- Consider $f_j = x(a + c)$ with $CDC = x'c'$.
- Without the don't cares: no simplification; mapping into an AOI gate.
- Matching with don't cares: mapping into a MUX gate.

Example

Extended matching
- Augment pattern function with mux function.
- Each cell input can be routed to any cluster input (or voltage rail).
- Input polarity can be changed.
- Cell and cluster may differ in the number of inputs.
- Define composite function $G(x, c)$:
  - Pin assignment amounts to determining $c$.
- Matching formula: $M(c) = \forall x\, [G(x, c)\,\overline{\oplus}\,f(x)]$

Example
\[ g = y_1 + y_2 y_3' \]
\[ y_1(c, x) = (c_0 c_1 x_1 + c_0 c_1' x_2 + c_0' c_1 x_3) \oplus c_2 \]
\[ G = y_1(c, x) + y_2(c, x)\, y_3(c, x)' \]

Extended matching: modeling
- Model composite functions by ROBDDs.
- Assume an $n$-input cluster and an $m$-input cell.
  - For each cell input:
    - $\lceil \log_2 n \rceil$ variables for pin permutation.
    - One variable for input polarity.
  - Total size of $\mathbf{c}$: $m(\lceil \log_2 n \rceil + 1)$.
- A match exists if there is at least one value of $\mathbf{c}$ satisfying $M(\mathbf{c}) = \forall \mathbf{x}\, [G(\mathbf{x}, \mathbf{c})\,\overline{\oplus}\,f(\mathbf{x})]$.

Example
- $g = x'y$, $f = wz'$
- $G(a, b, c, d, w, z) = (c \oplus (za + wa'))'\,(d \oplus (zb + wb'))$
- $f\,\overline{\oplus}\,G = (wz')\,\overline{\oplus}\,\big((c \oplus (za + wa'))'\,(d \oplus (zb + wb'))\big)$
- $M(a, b, c, d) = ab'c'd' + a'bcd$

Extended matching
- Captures implicitly all possible matches.
- No extra burden when exploiting don't care sets.
  - With *don't cares*: $M(c) = \forall x\, \big[(G(x, c)\,\overline{\oplus}\,f(x)) + f_{DC}(x)\big]$
- Efficient BDD-based representation.
- Extensions to support multiple-output matching.

Summary
- Library binding is very important.
- Rule-based approach:
  - General, sometimes inefficient.
- Algorithmic approach:
  - Pattern-based: fast, but limited.
  - Boolean: more general and efficient.
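As a concrete illustration of the Boolean matching formulation used above, the following Python sketch (an assumption, not from the slides) replaces the ROBDD machinery with brute-force truth-table enumeration. It reproduces the worked pin-assignment example: $x_1$ is assigned to $y_2'$ and $x_2$ to $y_1$, the pattern is a 2-input AND, and the induced pattern function $S_y(A \cdot g)$ comes out as $x_1' x_2$, which is then checked for tautological equality against a candidate cluster function.

```python
from itertools import product

# Minimal sketch (assumed): Boolean matching via a characteristic equation,
# evaluated by truth-table enumeration instead of ROBDDs.

def A(x1, x2, y1, y2):
    # characteristic equation of the pin assignment: y2 = x1', y1 = x2
    return (x1 ^ y2) and not (x2 ^ y1)

def g(y1, y2):
    # pattern (cell) function: 2-input AND
    return y1 and y2

def g_A(x1, x2):
    # smoothing (existential quantification) of y1, y2 out of A * g
    return any(A(x1, x2, y1, y2) and g(y1, y2)
               for y1, y2 in product((0, 1), repeat=2))

def matches(f):
    # tautology check: f(x) must equal g_A(x) for every input vector
    return all(bool(f(x1, x2)) == g_A(x1, x2)
               for x1, x2 in product((0, 1), repeat=2))

print([g_A(x1, x2) for x1, x2 in product((0, 1), repeat=2)])
# [False, True, False, False], i.e. x1' * x2
print(matches(lambda x1, x2: (not x1) and x2))   # True
```

In a real matcher the unateness and symmetry signatures described above would prune most candidate pin assignments before any such tautology check is attempted.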
{"Source-Url": "https://si2.epfl.ch/~demichel/publications/mcgraw/overheads/lib.pdf", "len_cl100k_base": 6072, "olmocr-version": "0.1.50", "pdf-total-pages": 63, "total-fallback-pages": 0, "total-input-tokens": 112431, "total-output-tokens": 8050, "length": "2e12", "weborganizer": {"__label__adult": 0.000339508056640625, "__label__art_design": 0.0005979537963867188, "__label__crime_law": 0.0003223419189453125, "__label__education_jobs": 0.00044798851013183594, "__label__entertainment": 7.063150405883789e-05, "__label__fashion_beauty": 0.00016796588897705078, "__label__finance_business": 0.000255584716796875, "__label__food_dining": 0.0003139972686767578, "__label__games": 0.0005779266357421875, "__label__hardware": 0.006725311279296875, "__label__health": 0.0003497600555419922, "__label__history": 0.0002384185791015625, "__label__home_hobbies": 0.0001773834228515625, "__label__industrial": 0.0010461807250976562, "__label__literature": 0.00015819072723388672, "__label__politics": 0.0003001689910888672, "__label__religion": 0.0005273818969726562, "__label__science_tech": 0.043701171875, "__label__social_life": 6.54458999633789e-05, "__label__software": 0.00804901123046875, "__label__software_dev": 0.93408203125, "__label__sports_fitness": 0.00042724609375, "__label__transportation": 0.0007386207580566406, "__label__travel": 0.00021457672119140625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 16126, 0.03546]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 16126, 0.58134]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 16126, 0.71609]], "google_gemma-3-12b-it_contains_pii": [[0, 60, false], [60, 282, null], [282, 672, null], [672, 900, null], [900, 1119, null], [1119, 1352, null], [1352, 1462, null], [1462, 1656, null], [1656, 1882, null], [1882, 2133, null], [2133, 2394, null], [2394, 2578, null], [2578, 2678, null], [2678, 2977, null], [2977, 3350, null], [3350, 3661, null], [3661, 3675, null], [3675, 3688, null], [3688, 3697, null], [3697, 3943, null], [3943, 4160, null], [4160, 4283, null], [4283, 4471, null], [4471, 4479, null], [4479, 4782, null], [4782, 5306, null], [5306, 5512, null], [5512, 5665, null], [5665, 5827, null], [5827, 5957, null], [5957, 6603, null], [6603, 6833, null], [6833, 7126, null], [7126, 7445, null], [7445, 8083, null], [8083, 8408, null], [8408, 8621, null], [8621, 9037, null], [9037, 10163, null], [10163, 10413, null], [10413, 10695, null], [10695, 10703, null], [10703, 10925, null], [10925, 11106, null], [11106, 11536, null], [11536, 11990, null], [11990, 12308, null], [12308, 12685, null], [12685, 12929, null], [12929, 13256, null], [13256, 13555, null], [13555, 13862, null], [13862, 13862, null], [13862, 14174, null], [14174, 14414, null], [14414, 14422, null], [14422, 14430, null], [14430, 14789, null], [14789, 14941, null], [14941, 15440, null], [15440, 15654, null], [15654, 15917, null], [15917, 16126, null]], "google_gemma-3-12b-it_is_public_document": [[0, 60, true], [60, 282, null], [282, 672, null], [672, 900, null], [900, 1119, null], [1119, 1352, null], [1352, 1462, null], [1462, 1656, null], [1656, 1882, null], [1882, 2133, null], [2133, 2394, null], [2394, 2578, null], [2578, 2678, null], [2678, 2977, null], [2977, 3350, null], [3350, 3661, null], [3661, 3675, null], [3675, 3688, null], [3688, 3697, null], [3697, 3943, null], [3943, 4160, null], [4160, 4283, null], [4283, 4471, null], [4471, 4479, null], [4479, 
4782, null], [4782, 5306, null], [5306, 5512, null], [5512, 5665, null], [5665, 5827, null], [5827, 5957, null], [5957, 6603, null], [6603, 6833, null], [6833, 7126, null], [7126, 7445, null], [7445, 8083, null], [8083, 8408, null], [8408, 8621, null], [8621, 9037, null], [9037, 10163, null], [10163, 10413, null], [10413, 10695, null], [10695, 10703, null], [10703, 10925, null], [10925, 11106, null], [11106, 11536, null], [11536, 11990, null], [11990, 12308, null], [12308, 12685, null], [12685, 12929, null], [12929, 13256, null], [13256, 13555, null], [13555, 13862, null], [13862, 13862, null], [13862, 14174, null], [14174, 14414, null], [14414, 14422, null], [14422, 14430, null], [14430, 14789, null], [14789, 14941, null], [14941, 15440, null], [15440, 15654, null], [15654, 15917, null], [15917, 16126, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 16126, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 16126, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 16126, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 16126, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 16126, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 16126, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 16126, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 16126, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 16126, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 16126, null]], "pdf_page_numbers": [[0, 60, 1], [60, 282, 2], [282, 672, 3], [672, 900, 4], [900, 1119, 5], [1119, 1352, 6], [1352, 1462, 7], [1462, 1656, 8], [1656, 1882, 9], [1882, 2133, 10], [2133, 2394, 11], [2394, 2578, 12], [2578, 2678, 13], [2678, 2977, 14], [2977, 3350, 15], [3350, 3661, 16], [3661, 3675, 17], [3675, 3688, 18], [3688, 3697, 19], [3697, 3943, 20], [3943, 4160, 21], [4160, 4283, 22], [4283, 4471, 23], [4471, 4479, 24], [4479, 4782, 25], [4782, 5306, 26], [5306, 5512, 27], [5512, 5665, 28], [5665, 5827, 29], [5827, 5957, 30], [5957, 6603, 31], [6603, 6833, 32], [6833, 7126, 33], [7126, 7445, 34], [7445, 8083, 35], [8083, 8408, 36], [8408, 8621, 37], [8621, 9037, 38], [9037, 10163, 39], [10163, 10413, 40], [10413, 10695, 41], [10695, 10703, 42], [10703, 10925, 43], [10925, 11106, 44], [11106, 11536, 45], [11536, 11990, 46], [11990, 12308, 47], [12308, 12685, 48], [12685, 12929, 49], [12929, 13256, 50], [13256, 13555, 51], [13555, 13862, 52], [13862, 13862, 53], [13862, 14174, 54], [14174, 14414, 55], [14414, 14422, 56], [14422, 14430, 57], [14430, 14789, 58], [14789, 14941, 59], [14941, 15440, 60], [15440, 15654, 61], [15654, 15917, 62], [15917, 16126, 63]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 16126, 0.09188]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
c4f5ad23f011f3192473fdfa0893dd457e5f16e4
Implementation of IEEE Std 1149.1-1990 in VHDL Peter M. Campbell, Mankuan Vai, Zainalabedin Navabi Electrical and Computer Engineering Department Northeastern University 409 Dana Research Center Boston, Massachusetts 02115 617-437-5413 campbell@nuvisi.coe.northeastern.edu / vai@northeastern.edu / navabi@northeastern.edu Abstract This paper describes the implementation of IEEE Std 1149.1-1990, IEEE Standard Test Access Port and Boundary-Scan Architecture, using behavioral VHDL (VHSIC Hardware Description Language). The IEEE 1149.1 standard provides a structured method for implementing testability in circuit designs and may be used to provide many different levels of testability. By implementing IEEE Std 1149.1-1990 in VHDL, designs which use the standard may be constructed and simulated to determine the operation of the design and the effectiveness of the included testability. This paper describes the basic components of IEEE 1149.1 as well as the test bench used to stimulate the finished logic. The test bench includes low-level and high-level functions which ease the test application process, provide high-level control, and are portable between different implementations of the test logic. An example which employs the test logic and uses the test bench functions for test application is included. 1. Introduction In the past few years, VLSI technology has improved so dramatically that the number of transistors possible on a single chip has reached the millions. With this many devices present, it becomes nearly impossible to test the chip externally for proper operation. As a result, incorporating testability into the design of the chip has increased tremendously in importance. The Institute of Electrical and Electronics Engineers has developed a test standard, IEEE Std 1149.1-1990, to assist in the test and maintenance of assembled printed circuit boards [1]. This standard defines the operation of test points within the circuit as well as a standard communications interface to access and control the test points. By strategically placing test points within the circuit, the testing of interconnections between distinct modules (i.e. circuit partitions, chips, circuit boards) as well as the testing of individual circuits may be accomplished. To the best of our knowledge, it appears that little work has been performed on implementing IEEE 1149.1 in VHDL [2]. Other related research has yielded a language which can be used to describe boundary-scan devices using a subset of VHDL [3]. The paper proposes that if a device is not describable by the language, then the device does not conform to the IEEE 1149.1 standard. Despite the obvious advantages of such a language, it lacks any simulation capabilities which is one of the objectives of this project. Another benefit of this project is that high-level test procedures are provided, allowing the operator to specify the function to be performed rather than the specific bit patterns to be used. Furthermore, these high-level procedures may be used directly with different implementations of the test logic in VHDL, providing the finished test code with a high degree of portability. 2. 
Overview of IEEE 1149.1 As stated in the IEEE Std 1149.1-1990 manual, the goals of the standard were to provide a standardized approach to: - testing the interconnections between integrated circuits once they have been assembled onto a printed circuit board or other substrate - testing the integrated circuit itself - observing circuit activity during the normal operation of the component(s) The primary testing technique used in IEEE 1149.1 is called Scan-Design [4]. Scan-design can make testing easier by allowing internal circuit nodes to be observed and controlled without the use of a large amount of I/O pins. This technique uses registers, called Scan-Registers, which have both shift and parallel-load capability. Circuit nodes which are not directly accessible can be controlled and observed by placing a scan-cell (Figure 1) on the node. One type of scan-design is Boundary-Scan in which a scan-cell is placed at every module input and output, forming a Boundary-Scan Register (BSR). Using this technique, the signals at the input and output pins may be verified by shifting the values through the Boundary-Scan Register. The component itself may be tested by shifting test data into the cells at the circuit inputs. Interconnections may be verified by shifting data into the cells placed at the circuit outputs and sampling the data at the input cells of the other connected components. The IEEE 1149.1 standard also allows scan cells to be placed at points in the design other than the component pins. This simplifies the test problem, allowing the component circuitry to be divided into sub-components which may be easier to test. 2.1. Test Logic There are four elements which are required in the test logic. The elements are listed and described below: - Test Access Port (TAP) - TAP controller - Instruction Register (IR) - Data Registers (DR) 2.1.1 Test Access Port (TAP) The Test Access Port (TAP) provides a standard interface for communication between the test logic and external test equipment or busses. The TAP consists of four signals, listed below. TCK - Test Clock. Clock for the test logic. TMS - Test Mode Select. Signals at this input are decoded by the controller to control test operations. TDI - Test Data Input. Serial input for test logic instructions and data. TDO - Test Data Output. Serial output for test instructions and data from test logic. 2.1.2 TAP Controller The TAP controller generates the internal clock and enable signals required by the test circuitry. It is a synchronous finite state machine consisting of 16 states, only one of which may be active at a time. State transitions are based upon the value of TMS and occur only on a rising edge of TCK. The state diagram is shown in Figure 2. The controller provides the three basic actions required for testing: stimulus application, execution and response capture. The states corresponding to these actions and other important controller states are described below. ![TAP Controller State Diagram](image) **Figure 2 - TAP Controller State Diagram** **Test-Logic-Reset** The test logic is disabled in this state so that the component logic can operate normally. The controller will return to this state when TMS is high for five consecutive rising edges on TCK, regardless of the original state. When this state is entered, the Instruction Register output lines are initialized to contain the *BYPASS* instruction (see Section 2.2). 
**Run-Test/Idle (test execution)** When certain user-defined instructions are present in the Instruction Register, this state is active and these instructions are executed. Instructions which cause no functions to execute do not change the test data registers. **Capture-DR / Capture-IR (response capture)** Data is loaded in parallel into the Data Register(s) / Instruction Register selected by the current instruction. Shift-DR / Shift-IR Data is shifted through the register connected between TDI and TDO. The data is shifted one stage for every rising edge of TCK. Update-DR / Update-IR (stimulus application) Data is latched onto the parallel outputs of the Data Register(s) / Instruction Register. This is done to prevent the outputs from changing as data is shifted through the register. 2.1.3 Instruction Register (IR) This register stores the instruction which selects the test to be performed, the Data Register to be accessed, or both. The Instruction Register (IR) is comprised of a chain of two or more Instruction Register Cells (Figure 3) which allow data to be serially loaded through the TDI input. Data is latched into the IR in the Capture-IR state while data is latched onto the IR outputs in the Update-IR controller state. The IR output lines contain the current instruction. There must be a single IR for every TAP controller in a design. ![Figure 3 - Instruction Register Cell](image) 2.1.4 Data Registers (DR) The set of test data registers (DRs) must include a single Bypass Register and a single Boundary-Scan Register. Optionally, a single Device-Identification Register may be included along with Design-Specific Test Registers. The Bypass Register (BR) is a single-stage shift-register whose purpose is to allow the test data registers within this device to be skipped. This can shorten the path between the system TDI and TDO when several IEEE 1149.1 compliant devices have their scan-paths connected. Only one Boundary-Scan Register (as described in Section 2) is allowed per component. Included as an option, the Device-Identification Register allows information about the component (such as the part number and manufacturer) to be stored within the component. Other optional registers are Design-Specific Test Data Registers which may be used when additional test features (such as test points and self-tests) are required. It is required that the Boundary-Scan and Bypass Registers each provide a direct connection between TDI and TDO. Aside from this rule, registers may be connected in any fashion. The DR to be exercised is usually selected based on the value in the IR. Each register (or combination of registers) must have a unique name, i.e. if two registers are connected to make a larger register, the new register must have a name distinct from its sub-registers. The length of each possible DR must be constant. 2.2. Instructions Instructions are used to select the test function to be performed and/or the registers to be used. There are three mandatory instructions, \textsc{Bypass}, \textsc{Sample/Preload}, and \textsc{Extest}, which are described below. Other instructions may be defined by the component designer. \textbf{Bypass} The \textsc{Bypass} instruction is the only instruction which uses the Bypass Register. The binary code for the \textsc{Bypass} instruction is "11...1" (i.e. logic '1' stored in every IR cell). Other binary codes may also be used for this instruction. This instruction must be loaded onto the IR outputs when the \textit{Run-Test-Idle} controller state is entered. 
\textbf{Sample/Preload} The \textsc{Sample/Preload} instruction samples data at the parallel inputs to the selected Boundary-Scan Register and allows data to be shifted into the register. Only the Boundary-Scan Register may be selected. The data at the system pins is loaded into the register in the \textit{Capture-DR} controller state. The data in the shift-register stage is shifted through in the \textit{Shift-DR} state, and is latched onto the parallel output buffers of the register in the \textit{Update-DR} controller state (usually through use of the \textsc{Extest} instruction). The binary value of the \textsc{Sample/Preload} instruction may be selected by the designer. \textbf{Extest} The \textsc{Extest} instruction permits testing of off-chip circuitry and interconnections. The data stored in the output-pin Boundary-Scan Register cells is applied and data at the input pins is latched into the register. The data in the shift-registers/latches is typically loaded using the \textsc{Sample/Preload} instruction. The \textsc{Extest} instruction may only select the Boundary-Scan Register. The binary value of the instruction is "00...0" (i.e. a logic '0' is placed on the output of every IR cell). Note that the cells at the input pins may be designed to allow signals to be driven onto the logic inputs when this instruction is selected (in order to prevent misoperation when performing the interconnect test). 3. VHDL Implementation 3.1 Test Logic The basic test logic is implemented in several parts which are described below. - The TAP controller - Instruction, Data and Bypass Register cells - Instruction and Data Registers created from their respective cells - The creation of finished test logic using additional logic (multiplexers) and interconnections Although the basic structure of the test logic is generally the same, the final configuration depends on the designer and on the design itself. The test logic described in the example (Figure 8) uses the minimum entities required by the standard and is in the simplest configuration possible. It contains a TAP controller, an Instruction Register (IR), a Boundary-Scan Register (BSR), a Bypass Register (BR) and the system logic. 3.2. Test Bench A test bench is a VHDL description which provides stimuli to the input nodes and can observe the output nodes of the circuit under test. A test bench has no external ports, as noted by the absence of a \textsc{Port} clause in the declaration. The test bench specifies the waveforms applied to the input ports of the circuit under test and allows the output ports to be observed. A specific design style is used within the test bench presented here. This style has several advantages: it allows tests to be specified sequentially as procedure calls, it allows test data files to remain open (without being reset) for subsequent procedure calls, and it permits signal assignments to occur in parallel with the operation of the test logic. The style is shown in Figure 4. To simplify the test-application process, several packages may be made visible to the test bench. These packages are called micro_procedures, macro_procedures, and operation_macros. The level of abstraction increases in each package, reducing the amount of code needed to specify a test sequence or operation. 3.3 Micro-Procedures Low-level procedures, called Micro-procedures, are used to assign signals directly to the TAP inputs. Through the use of overloading, the signal values to be applied can be specified as either a bit vector or a datafile. 
Procedures are also included to modify the characteristics of the test bench. The design style used in the test bench allows sequential procedure calls which use the datafile to continue from the last item read in the file. The micro-procedures are listed in Figure 5 (overloaded procedures are listed only once). Note that the coding of micro-procedures generally depends upon the implementation of the test logic in VHDL. ``` TYPE bit_data IS FILE OF CHARACTER; PROCEDURE write_clock ( time_low, time_high : IN TIME ); PROCEDURE read_clock ( time_low, time_high : OUT TIME ); PROCEDURE write_state ( current_state : IN INTEGER ); PROCEDURE read_state ( current_state : OUT INTEGER ); PROCEDURE write_next_time ( time_value : IN TIME ); PROCEDURE read_next_time ( time_value : OUT TIME ); PROCEDURE goto ( SIGNAL theclock, thesignal, thedata : OUT BIT; to_state : IN INTEGER ); PROCEDURE assign_bits ( SIGNAL theclock, thesignal, thedata : OUT BIT; VARIABLE thesignal file : IN bit_data; data_vector : IN BIT_VECTOR ); ``` Figure 5 - Micro_procedures Package Procedures The clock duty cycles may be specified using the write_clock and read_clock procedures. The write_state and read_state procedures respectively store and read the active state of the machine. The goto procedure uses read_state and write_state to generate a bit pattern which will change the state of the machine from the current state to the desired state. The write_next_time and read_next_time procedures are used to allow sequential calls to assign_bits without overwriting... old transactions. The assign_bits procedure is used to apply signals to TMS (thesignal) and TDI (thedata) synchronized with TCK (theclock) using clock parameters provided by read_clock. The procedure calls read_next_time to determine when the next signal assignment can be made without overwriting old transactions. Before the procedure ends, write_next_time is called to store the next time a signal assignment can be made, preventing the new signal assignments from being overwritten. 3.4 Macro-procedures Higher-level procedures, called Macro-procedures, use micro-procedures to apply signals to the TAP inputs, allowing the test bench to specify the function(s) to be performed rather than the actual bit patterns to be applied. These procedures are stored in the macro_procedures package. As with the micro-procedures, overloading is used to allow signal values to be specified as either a bit vector or a datafile, and sequential procedure calls using a datafile continue from the last item read in the file. The macro-procedures are listed in Figure 6 (overloaded procedures are listed only once). The operations performed by macro-procedures are basically defined by the IEEE 1149.1 specification and are independent of the VHDL implementation of IEEE 1149.1 and of the final design of the test logic and target circuit (i.e. they are general enough to be used in testing any IEEE 1149.1 design). This portability between different logic implementations allows macro-procedures to be used in a range of designs without being rewritten. Note that the micro-procedures may require recoding for each type of test logic. 
```
PROCEDURE dr_scan ( SIGNAL theclock, thesignal, thedata : OUT BIT;
                    VARIABLE thedata_file : IN bit_data;
                    number_of_cycles : IN INTEGER );

PROCEDURE ir_scan ( SIGNAL theclock, thesignal, thedata : OUT BIT;
                    VARIABLE thedata_file : IN bit_data;
                    number_of_cycles : IN INTEGER );
```

Figure 6 - Macro_procedures Package

The dr_scan and ir_scan procedures load a bit pattern into the TAP Data Register and Instruction Register, respectively, and leave the controller in the Exit1-DR/IR state. The procedures use the goto, assign_bits and write_state procedures to perform their function.

3.5 Operation Macros

An even higher level of procedures may be defined as Operation Macros. These operation macros are similar to macro-procedures but are written for a specific design incorporating IEEE 1149.1 and generally cannot be directly shared between different designs. Typically, operation macros would be written after the required test operations are known and have been developed. Operation macros are contained in the operation_macros package. An operation macro developed for the example is shown in Figure 7. The extest_op macro loads the Sample/Preload instruction into the IR, loads a bit pattern into the BSR, loads the Extest instruction into the IR and then goes to the Update-IR state to apply the data stored in the BSR.

```
PROCEDURE extest_op ( SIGNAL theclock, thesignal, thedata : IN BIT;
                      length : IN INTEGER );
```

Figure 7 - Operation Macros Package

4. Example Implementation

This example is based on a nibble-comparator. The comparator is designed to allow several comparators to be connected in a bit-slice fashion to create a larger comparator. It has two 4-bit inputs (a and b), three mode inputs (a_gt_b, a_eq_b, and a_lt_b) and three mode outputs (gtr, eql, and lss). The comparator is made IEEE 1149.1 compliant by adding boundary-scan cells to the inputs and outputs, thereby forming a BSR. The scannable comparator is wired with a TAP Controller, an IR, a Bypass Register (BR) and assorted glue logic to create the finished design (Figure 8). The test bench applies signals to the comparator inputs and the TAP inputs simultaneously. The test is designed to allow the comparator to operate normally for a period of time. The test logic is kept inactive until midway through the comparator input sequence (1000 NS), at which point data is loaded into the boundary-scan register and applied, effectively blocking the comparator inputs and performing an interconnection test. The TAP controller then returns to the Test-Logic-Reset state, making the test logic inactive. The test bench Architecture is shown in Figure 9.

5. Conclusions

As circuits grow in complexity, the importance of testability in the design process will increase dramatically as will the use of Hardware Description Languages. The IEEE Std 1149.1-1990 test standard provides a structured method of incorporating testability into a design and makes it easier for modules from different suppliers to become part of a testable system. In implementing IEEE 1149.1 in VHDL, a model of the standard is created, allowing an entire system which uses the standard to be simulated and the test patterns developed before the system is constructed. The test procedures developed for the VHDL implementation of IEEE 1149.1 ease the test application process, allow for high-level control and are portable among different implementations of the test logic.

6. References
```
USE WORK.micro_procedures.ALL;
USE WORK.macro_procedures.ALL;

ENTITY testable_nibble_comparator_tester IS
END testable_nibble_comparator_tester;

ARCHITECTURE io OF testable_nibble_comparator_tester IS
   COMPONENT testable_nibble_comparator PORT ( a, b : IN BIT_VECTOR,...
   END COMPONENT;
   CONSTANT test_file : STRING := "test_data";
   CONSTANT extest : BIT_VECTOR (1 DOWNTO 0) := "00";
   CONSTANT sample_preload : BIT_VECTOR (1 DOWNTO 0) := "01";
   SIGNAL a, b : BIT_VECTOR (3 DOWNTO 0);
   SIGNAL tck, tms, tdi, tdo, gtr, eql, lss : BIT;
BEGIN
   pl : PROCESS
      FILE thetest_file : bit_data IS IN test_file;  -- Test vector file
   BEGIN
      -- Stay in Run-Test/Idle state (to disable test logic)
      -- until time = 1000 NS (i.e. for 10 cycles).
      assign_bits ( tck, tms, tdi, "1", "0", 10 );
      -- Load Sample/Preload instruction code into IR.
      ir_scan ( tck, tms, tdi, sample_preload );
      -- Load 14-bit test vector into BSR from "test_data".
      dr_scan ( tck, tms, tdi, thetest_file, 14 );
      -- Load Extest instruction code into IR so data is driven
      -- onto BSR outputs in the Update-IR state.
      ir_scan ( tck, tms, tdi, extest );
      -- Go to the Test-Logic-Reset state to disable test logic.
      -- Pass thru Update-IR state so new instruction is present.
      goto ( tck, tms, tdi, 15 );
      -- Keep the clock running until comparator data is depleted.
      assign_bits ( tck, tms, tdi, "0", "0", 5 );
      WAIT;
   END PROCESS pl;
END io;
```

Figure 9 - TAP_example Test Bench
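The test bench in Figure 9 drives the TAP purely through TMS/TDI sequences, so its correctness rests on the 16-state controller described in Section 2.1.2. The following Python sketch is not part of the paper or its VHDL model; it simply encodes the standard IEEE 1149.1 state transitions as a dictionary, which is handy for sanity-checking the bit patterns that procedures such as goto and ir_scan are expected to generate.

```python
# Minimal sketch (assumed): the IEEE 1149.1 TAP controller state machine.
# Next states are listed as (next state for TMS=0, next state for TMS=1).
NEXT = {
    "Test-Logic-Reset": ("Run-Test/Idle",  "Test-Logic-Reset"),
    "Run-Test/Idle":    ("Run-Test/Idle",  "Select-DR-Scan"),
    "Select-DR-Scan":   ("Capture-DR",     "Select-IR-Scan"),
    "Capture-DR":       ("Shift-DR",       "Exit1-DR"),
    "Shift-DR":         ("Shift-DR",       "Exit1-DR"),
    "Exit1-DR":         ("Pause-DR",       "Update-DR"),
    "Pause-DR":         ("Pause-DR",       "Exit2-DR"),
    "Exit2-DR":         ("Shift-DR",       "Update-DR"),
    "Update-DR":        ("Run-Test/Idle",  "Select-DR-Scan"),
    "Select-IR-Scan":   ("Capture-IR",     "Test-Logic-Reset"),
    "Capture-IR":       ("Shift-IR",       "Exit1-IR"),
    "Shift-IR":         ("Shift-IR",       "Exit1-IR"),
    "Exit1-IR":         ("Pause-IR",       "Update-IR"),
    "Pause-IR":         ("Pause-IR",       "Exit2-IR"),
    "Exit2-IR":         ("Shift-IR",       "Update-IR"),
    "Update-IR":        ("Run-Test/Idle",  "Select-DR-Scan"),
}

def run(state, tms_bits):
    """Apply one TMS bit per rising edge of TCK and return the final state."""
    for tms in tms_bits:
        state = NEXT[state][tms]
    return state

# Five TCK cycles with TMS high reach Test-Logic-Reset from any state.
assert all(run(s, [1, 1, 1, 1, 1]) == "Test-Logic-Reset" for s in NEXT)

# From Run-Test/Idle, TMS = 1,1,0,0 selects the IR scan path and starts shifting.
print(run("Run-Test/Idle", [1, 1, 0, 0]))   # Shift-IR
```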
{"Source-Url": "http://www.eda.org/VIUF_proc/Spring92/CAMPBELL92A.PDF", "len_cl100k_base": 4792, "olmocr-version": "0.1.53", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 10546, "total-output-tokens": 5494, "length": "2e12", "weborganizer": {"__label__adult": 0.0007271766662597656, "__label__art_design": 0.0008115768432617188, "__label__crime_law": 0.0006251335144042969, "__label__education_jobs": 0.0009131431579589844, "__label__entertainment": 0.00013935565948486328, "__label__fashion_beauty": 0.00038552284240722656, "__label__finance_business": 0.00036787986755371094, "__label__food_dining": 0.0006303787231445312, "__label__games": 0.0009226799011230468, "__label__hardware": 0.055694580078125, "__label__health": 0.0009508132934570312, "__label__history": 0.0005350112915039062, "__label__home_hobbies": 0.000316619873046875, "__label__industrial": 0.003025054931640625, "__label__literature": 0.0002472400665283203, "__label__politics": 0.0004417896270751953, "__label__religion": 0.0010395050048828125, "__label__science_tech": 0.291748046875, "__label__social_life": 9.268522262573242e-05, "__label__software": 0.01032257080078125, "__label__software_dev": 0.62744140625, "__label__sports_fitness": 0.0005617141723632812, "__label__transportation": 0.001720428466796875, "__label__travel": 0.0003101825714111328}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 22535, 0.02896]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 22535, 0.71101]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 22535, 0.87938]], "google_gemma-3-12b-it_contains_pii": [[0, 3177, false], [3177, 5574, null], [5574, 7054, null], [7054, 9499, null], [9499, 12807, null], [12807, 15371, null], [15371, 18526, null], [18526, 20498, null], [20498, 22535, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3177, true], [3177, 5574, null], [5574, 7054, null], [7054, 9499, null], [9499, 12807, null], [12807, 15371, null], [15371, 18526, null], [18526, 20498, null], [20498, 22535, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 22535, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 22535, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 22535, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 22535, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 22535, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 22535, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 22535, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 22535, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 22535, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 22535, null]], "pdf_page_numbers": [[0, 3177, 1], [3177, 5574, 2], [5574, 7054, 3], [7054, 9499, 4], [9499, 12807, 5], [12807, 15371, 6], [15371, 18526, 7], [18526, 20498, 8], [20498, 22535, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 22535, 0.0]]}
olmocr_science_pdfs
2024-12-11
2024-12-11
823a07ae916acc096cb0ca48fbe17c325ba989db
SUMMARY This paper considers a number of possible ways of implementing the Ada rendezvous in a computer system comprising a number of processors. It shows that, in principle, a two phase protocol requiring four messages to be passed is necessary to implement the rendezvous correctly when timed calls are used. However in many cases a simpler, one phase, protocol requiring only two messages per rendezvous can be used. A comparison is made between the rendezvous and message based communication for situations where one to many, rather than one to one, communication is required. The rendezvous is shown to be very inefficient for implementing one to many communication. Finally some of the problems of loading and running Ada programs are briefly considered. 1) Introduction The purpose of this document is to discuss the problems of implementing Ada on a Multiple Processor Computer System and to compare possible ways of overcoming these problems. We shall be concerned primarily with how a rendezvous can be achieved between tasks running on different computers, as this seems to be the main technical problem. We will compare the different ways of implementing the rendezvous in terms of the number of inter-computer messages, and context switches which they require as these are (probably) the two most important aspects of the overheads involved in providing the rendezvous. 2) Assumptions We make the following basic assumptions about the characteristics of the Multiple Processor System (MPS) on which we wish to implement Ada. First, it seems that the implementation of the rendezvous in a system having shared main memory can be achieved simply by extending a single processor implementation. It is less obvious how to implement the rendezvous without shared main memory, so we will concentrate our attention on such Multi Computer Systems (MCS). As a consequence we will assume that parameters and results are passed by value rather than by reference. Second, we assume that there is a kernel running in each computer which supports the Ada tasks, performs scheduling and provides the mechanisms for inter-computer communication. We assume that the kernel need not be written in Ada. Third, we assume that the communication system provides the abstraction of total reliability, so we will not consider the problems of lost messages etc. Implicitly this means that the transmission of a single message, at the user program level, may require many low level messages to be transmitted. We will always quote the number of messages required by a particular implementation of the rendezvous at the user program level as this gives the simplest basis for comparison. Finally, we distinguish two different types of context switch. A Task Context Switch (TCS) occurs when a task is scheduled or suspended for some reason (e.g. waiting for an entry call to be accepted). An Interrupt Context Switch (ICS) occurs when an interrupt handler is entered or exited. We will assume that an ICS only occurs twice per received message although this may be very far from the truth with word or byte oriented communication systems. One might expect that the time required to perform an ICS will be smaller than that required to perform a TCS, although this will not universally be true. It is perhaps worth noting that the Nassi - Habermann optimisation cannot be performed when the communicating tasks are in different computers. However it might be possible to exploit this optimisation if shared main memory is available. 
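The counting conventions above can be made explicit with a small sketch. The charging rule used below is an interpretation of the stated assumptions rather than anything given in the paper: each user-level message costs the receiving task two ICS (interrupt entry and exit) and two TCS (being scheduled on receipt, then suspending). Under that rule the tallies match the figures quoted later for the one phase and two phase rendezvous protocols.

```python
from collections import Counter

# Minimal accounting sketch (assumed, not from the paper): tally user-level
# messages, task context switches (TCS) and interrupt context switches (ICS)
# for a protocol described as a sequence of (sender, receiver) messages.

def tally(messages):
    tcs, ics = Counter(), Counter()
    for sender, receiver in messages:
        tcs[receiver] += 2   # scheduled on receipt, suspended afterwards
        ics[receiver] += 2   # interrupt entry/exit per received message
    return len(messages), dict(tcs), dict(ics)

one_phase = [("caller", "called"),   # call message (parameters)
             ("called", "caller")]   # return message (results)

two_phase = [("caller", "called"),   # call_msg
             ("called", "caller"),   # accept_ack
             ("caller", "called"),   # confirm_msg
             ("called", "caller")]   # return message

print(tally(one_phase))  # (2, {'called': 2, 'caller': 2}, {'called': 2, 'caller': 2})
print(tally(two_phase))  # (4, {'called': 4, 'caller': 4}, {'called': 4, 'caller': 4})
```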
3) Simple Rendezvous

By a simple rendezvous we mean one where the calling task calls the entry unconditionally (and without timeouts), and the called task unconditionally accepts the call (although not necessarily as soon as it is issued). Naturally we assume that the calling and called tasks are running on separate computers. The simple rendezvous can be implemented as indicated below:

[Figure: the two-message exchange, a call message carrying the parameters and a reply carrying the results.]

This straightforward implementation requires two messages to be passed between the computers, and requires two TCS in each machine to handle the receipt of the messages. In the calling task two TCS are required. In the called task up to two TCS may be required, but the number depends on whether or not the task is active immediately before and after the rendezvous. This comment applies throughout the following discussion. If necessary exceptions can be returned to the calling task: e.g. Tasking Error can be returned if the called task has terminated.

4) Conditional Entry Calls

These can be implemented in the same basic way as the simple rendezvous, except that a rejection may have to be returned, rather than the result of executing the entry procedure. It is possible that two TCS will be required in the called task even if the call is rejected. These context switches can be avoided if the kernel (interrupt handling routines) can preserve enough information about the called task to know whether or not a rendezvous is possible, without entering the environment of the called task.

5) Selective Waits

Selective waits can be implemented using the basic method described above.

6) Timed Entry Calls

6.1) Introduction

If timed entry calls were implemented by the above mechanism it is possible that the timeout might expire, causing the calling task to execute its alternative code, whilst the called task was executing the entry. This eventuality would be a violation of the rendezvous semantics, and clearly must be avoided. For a timed entry call, the timing is performed on acceptance of the entry call, not on the execution of the entry. This means that (in principle at least) information needs to be passed back to the calling task once the call is accepted, as well as on the completion of the call. This is the basis of our first alternative solution below.

6.2) Simple Approach

The simplest approach to implementing timed entry calls seems to be to use a two phase "handshake" protocol. The first phase governs the acceptance of the call, and the second is equivalent to the protocol described in section 3 controlling the execution of the entry. The execution of the protocol would be as follows if the call were successful:

<table>
<thead>
<tr><th>Calling Task</th><th>Called Task</th></tr>
</thead>
<tbody>
<tr><td>entry call</td><td>call_msg</td></tr>
<tr><td>suspend</td><td>schedule</td></tr>
<tr><td></td><td>accept</td></tr>
<tr><td></td><td>report accept</td></tr>
<tr><td>schedule</td><td>accept_ack</td></tr>
<tr><td>call confirm</td><td>suspend</td></tr>
<tr><td>suspend</td><td>confirm_msg</td></tr>
<tr><td></td><td>schedule</td></tr>
<tr><td></td><td>execute_entry</td></tr>
<tr><td></td><td>return</td></tr>
<tr><td></td><td>suspend</td></tr>
<tr><td></td><td>suspend</td></tr>
</tbody>
</table>

This requires four messages, four TCS for each task, and four ICS for each task. Because of the timeout it is possible for the called task to accept the entry call after the caller's timeout has expired, and the task has continued execution (a timing sketch of this situation follows below).
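The race just described, where the acceptance is reported after the caller's timeout has expired, is easy to see if the two-phase exchange is reduced to timestamps. The Python sketch below is an illustration only, not part of the paper; it assumes a fixed symmetric one-way message latency and ignores context-switch times, and the parameter names are hypothetical.

```python
# Minimal timing sketch (assumed) of the two-phase timed entry call.
# 'latency' is the one-way message delay, 'accept_at' is when the called
# task reaches its accept statement, 'timeout' is the caller's expiry.

def timed_call(timeout, accept_at, latency):
    events = [("call_msg sent", 0.0)]
    call_arrives = latency                       # call reaches the called task
    accept_ready = max(call_arrives, accept_at)  # entry call is accepted
    events.append(("accept_ack sent", accept_ready))
    ack_arrives = accept_ready + latency         # acceptance reported to caller
    if ack_arrives <= timeout:
        events.append(("confirm_msg sent", ack_arrives))
        events.append(("entry executed", ack_arrives + latency))
    else:
        events.append(("caller timed out", timeout))
        events.append(("abort_msg sent", ack_arrives))  # cancel the acceptance
    return events

print(timed_call(timeout=20, accept_at=5, latency=2))   # rendezvous completes
print(timed_call(timeout=5, accept_at=10, latency=2))   # acceptance cancelled
```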
There are two ways of recovering from this situation: the calling task can be "rolled back", so that the entry is executed; or the acceptance of the call can be cancelled. The former course may be very difficult, especially if the calling task has entered into communication with other tasks, or performed some I/O before the accept message is received. The latter course only requires the information recording the acceptance of the call to be changed (as the called task will be suspended). Clearly the latter approach is much simpler to achieve and it correctly implements the semantics of the timed call. The behaviour will be as follows for a call which is not accepted before the timeout expires: <table> <thead> <tr> <th>Calling Task</th> <th>Called Task</th> </tr> </thead> <tbody> <tr> <td>entry-call</td> <td>call_msg</td> </tr> <tr> <td>suspend</td> <td>schedule</td> </tr> <tr> <td></td> <td>accept</td> </tr> <tr> <td>time out</td> <td>report accept</td> </tr> <tr> <td>suspend</td> <td>accept_ack</td> </tr> <tr> <td>abort entry</td> <td>suspend</td> </tr> <tr> <td></td> <td>abort_msg</td> </tr> <tr> <td></td> <td>schedule</td> </tr> <tr> <td></td> <td>cancel_accept</td> </tr> <tr> <td></td> <td>suspend</td> </tr> </tbody> </table> This requires three messages, two TCS in the calling task, and four in the called task. There will be two ICS in the calling task, and four in the called task. A problem which arises with this implementation is dealing with accept messages which arrive long after the timeout has expired, and the calling task has continued with its alternative action. There seem to be two possible approaches. The kernel could maintain information regarding the incomplete rendezvous until the accept message is received and the abort message can be generated. This technique may give problems with storage management if large numbers of rendezvous have to be "remembered". Alternatively, the incomplete rendezvous can be forgotten and the kernel can respond to any unrecognised accept message with an abort message. This latter technique will not impose storage overheads but it has the disadvantage that it will not help in the detection of certain error conditions, such as trying to communicate with a task that has already terminated. Clearly this straightforward implementation is quite costly in terms of messages and context switches. There are, however, more efficient ways of implementing the rendezvous which may be applicable in some circumstances. These methods rely on being able to execute the timeout in the called task. 6.3) Timing at Both Ends When calling the entry, the calling task could transmit the timeout duration to the called task. Assuming that both tasks have access to clocks running at roughly similar rates, then the called task can inspect the timeout period and not reply if it knows that it will not be able to accept the call quickly enough. Unfortunately we cannot guarantee to avoid the situation where an accept message is received after the caller's timeout has expired (unless the timing of the communication etc. is deterministic and well known) so we must still cater for the possibility of having to abort the rendezvous. 6.4) Timing only at Called End If the message delay through the communication system is well known, deterministic, and short with respect to the timeout period then it may be possible for the called task to execute the timeout on behalf of the calling task. 
If this is the case then we can use a single phase protocol for performing the rendezvous as we described in section 3. The only change to the protocol of section 3 is that the called task can return a "timed out" message to the caller in order to allow it to continue without performing the rendezvous. This implementation is attractive in that it is comparatively efficient. However there is a problem to do with reliability. If the processor running the called task fails then the calling task may never (or very belatedly) receive its timeout message. One of the reasons for using timed entry calls may be to allow detection of, and recovery from, remote failures, in which case timing at the called end may not be satisfactory. Arguably it should be the responsibility of the kernel to detect remote failures and to return a suitable exception to the task. However the time taken by the kernel to detect the failure may be much longer than the time the calling task is willing to wait. It seems therefore that there will be circumstances under which the two phase protocol will have to be used.

7) Summary of Rendezvous Implementations

We have described a number of ways in which the rendezvous can be implemented. The most efficient method which we can use is a simple, one phase protocol as described in section 6.4. This protocol can cater for the simple rendezvous, conditional rendezvous and, under some circumstances, timed rendezvous. This protocol will typically require two messages, two TCS per task, and two ICS per task. Where the one phase protocol is not acceptable, e.g. where we wish to recover from the failure of remote computers, the two phase protocol described in section 6.2 will have to be used. This protocol has twice the overhead of the single phase protocol if the rendezvous is completed successfully. If the rendezvous is not completed (due to a timeout expiring) then the overheads are rather lower.

8) Comparison with Message Passing

8.1) Message Passing Paradigms

There are essentially three distinct forms of message passing inter-process communication scheme. The simplest consists of the sender transmitting a message and continuing execution without an acknowledgement ever being returned. This scheme does not make it easy to detect remote failures or lost messages, but may be quite appropriate under certain circumstances - e.g. transmission of data from a sensor, where the loss of the occasional reading will not adversely affect the behaviour of the system. The second paradigm is that of one process sending a message, then waiting until the message is acknowledged by the process which received the message. This method gives scope for detecting and recovering from certain classes of errors (e.g. lost messages caused by communications failures), and clearly matches the semantics of the rendezvous. The third paradigm is that of sending a message and receiving an explicit acknowledgement, with the sending process able to continue processing between sending the message and receiving the acknowledgement. The advantage this offers over the second method is that concurrency is improved since the sending process may be performing useful work whilst waiting for an acknowledgement. This form of behaviour can only be achieved in Ada by the artifice of creating a task specifically for performing the inter-task communication, thus allowing the parent task to continue executing. This technique can have severe disadvantages as described below.
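The third paradigm, which the rendezvous cannot express directly, is easy to show in a modern notation. The following Python asyncio sketch is purely illustrative and obviously not something the paper (or Ada of that era) could use; the queues stand in for the communication system, and all names are hypothetical.

```python
import asyncio

# Minimal sketch (assumed) of the third paradigm: send a message, keep
# computing, and only later wait for the acknowledgement. The Ada rendezvous
# corresponds to the second paradigm, where the sender would block at the
# point of the send until the acknowledgement arrives.

async def receiver(inbox: asyncio.Queue, acks: asyncio.Queue):
    msg = await inbox.get()
    await acks.put(f"ack({msg})")

async def sender(inbox: asyncio.Queue, acks: asyncio.Queue):
    await inbox.put("update")       # paradigm 3: fire the message...
    local_work = sum(range(1000))   # ...and do useful work before blocking
    print("done local work:", local_work)
    print("received:", await acks.get())   # only now wait for the ack

async def main():
    inbox, acks = asyncio.Queue(), asyncio.Queue()
    await asyncio.gather(receiver(inbox, acks), sender(inbox, acks))

asyncio.run(main())
```

With one such exchange per cooperating task, the sends in a 1:N protocol can all be outstanding at once, which is exactly the concurrency the Two Phase Commit discussion below shows the rendezvous losing.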
8.2) The Two Phase Commit Protocol We describe the implementation of the Two Phase Commit Protocol in Ada as it serves to show the problems which arise from the fact that a task cannot continue executing between sending a message and receiving an acknowledgement (i.e. the task is suspended whilst the rendezvous is in progress). We believe this protocol to be a very salient example as it is widely used as a way of ensuring consistency control in distributed database systems. We can adequately describe the most important features of the Two Phase Commit Protocol (2PC) by the following example. Imagine that we have a replicated database with a total of N copies, and we wish to update the database so that all the copies remain in step. In essence we need to update all the copies indivisibly. This is achieved by one task (the control task) notifying all the copies that an update is to be performed, and the tasks responsible for the copies either acknowledge that they can perform the update, or say that the update has to be aborted because it conflicts with some update already in progress. This is the first, or notification, phase. The originating task then either informs all the cooperating tasks that the update must be aborted, or instructs them to perform the update, as appropriate. In the latter case the tasks will update their copies of the database, then return acknowledgments to the originating task. This is the second, or update, phase. Clearly the implementation of the protocol will be complicated by the need to deal with failures of remote computers etc., but the basic form is not affected by these considerations. Using our third paradigm for the message passing model, the 2PC control task can transmit messages to all N tasks, then wait for acknowledgments, in both the first and second phases. Thus the message passing discipline allows us to achieve a high degree of concurrency. If we implemented the 2PC protocol in Ada the simplest way would be to perform the communication with each of the N cooperating tasks in turn, thus requiring rendezvous in series in the first phase, and similarly for the second phase. Note that this already requires twice as many messages and context switches as the message based implementation. This implementation will obviously be slow as the inherent parallelism is lost. In order to improve parallelism we could implement the protocol so that the controlling task spawned N subtasks to perform the communication. This has the undesirable side effect of increasing the number of TCS as control passes between the main and the subtasks. If separate subtasks were used for each phase, and task/subtask synchronisation can be achieved by use of shared variables, then this requires at least another 9N TCS, making a total of 16N TCS, four times the number required using message passing. It will be quite awkward determining when each phase has finished - perhaps the easiest way being to use the subtask statuses to determine when they have all terminated. It might be possible to improve on these overheads by creating N subtasks for the duration of the 2PC operation, however this would mean that we would have to use rendezvous between the subtasks and the control task in order to signal the end of each phase. This still requires 9N TCS, but would involve less process creation and destruction which must themselves be expensive operations. 
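To see where these counts come from, the following rough sketch (again ours rather than code from the report, with invented names, and ignoring failures of remote machines) shows the straightforward serial Ada implementation, in which the control logic performs both phases by a rendezvous with each copy in turn:

```ada
--  Illustrative sketch only: the serial Ada form of two phase commit,
--  with the control logic performing one rendezvous after another.
procedure Serial_Two_Phase_Commit is
   N : constant := 3;                      --  number of copies, for illustration

   task type Copy_Manager is
      entry Prepare (OK : out Boolean);    --  phase 1: vote on the update
      entry Commit;                        --  phase 2: apply the update
      entry Cancel;                        --  phase 2: abandon the update
   end Copy_Manager;

   Copies : array (1 .. N) of Copy_Manager;

   task body Copy_Manager is
      Vote : constant Boolean := True;     --  a real manager would check for conflicts
   begin
      accept Prepare (OK : out Boolean) do
         OK := Vote;
      end Prepare;
      select
         accept Commit;                    --  update the local copy here
      or
         accept Cancel;
      end select;
   end Copy_Manager;

   All_OK : Boolean := True;
   Vote   : Boolean;
begin
   for I in Copies'Range loop              --  phase 1: N rendezvous in series
      Copies (I).Prepare (Vote);
      All_OK := All_OK and Vote;
   end loop;
   for I in Copies'Range loop              --  phase 2: another N rendezvous in series
      if All_OK then
         Copies (I).Commit;
      else
         Copies (I).Cancel;
      end if;
   end loop;
end Serial_Two_Phase_Commit;
```

Every entry call in the two loops is a complete rendezvous, so the 2N exchanges that the message passing version can overlap are here strictly sequential.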
However it is possible that some of these context switches can be eliminated by performing optimisations, analogous to the Nassi - Habermann optimisations. Some MCS communication systems provide broadcast facilities which can provide very efficient 1:many communication. The Ada rendezvous does not allow such hardware to be exploited. In short it seems that Ada will enforce very expensive, and quite complex implementations of protocols of the above form. Since this form of protocol will be at the heart of any Distributed Database Manager, and of many other MCS applications, Ada may be a very poor choice of implementation language from the point of view of efficiency, and simplicity of implementation. 9) Program Loading and Hardware Mapping The mapping of the Ada program onto the available hardware must be specified at some stage in the program development and loading process, unless the mapping is to be chosen automatically by the programming support environment. It should be fairly straightforward to specify the mapping either as pragmata in the program source text, or as commands to the program loading system, although it may be difficult to decide what this mapping should be. We are not concerned with the problems of deciding on a mapping, rather on what should be loaded, and how it can be executed. In particular we are concerned with what should happen to the "main body" of the program. For the sake of simplicity let us assume that the main body of the program consists of a sequence of declarations of tasks which are to be run on a number of separate machines. Clearly the code for the individual tasks should be loaded on the designated machines together with code for instantiating the tasks. Address information to allow inter-task communication must also be made available to the kernel (this may be regarded as the vestige of the declarations of the other tasks). We can rely on the semantics of the rendezvous to ensure correct behaviour despite the fact that tasks which wish to communicate may be created at significantly different times. The above solution is satisfactory so long as no task tries to create a task to run in another machine. If tasks can be created in other machines, then a mechanism has to be provided for one kernel to request another to create and run a task. This facility may cause problems if the processes in separate machines wish to share data, and it may also lead to difficulties in scheduling, and in assessing the amount of mill time absorbed by any task, etc. Similar comments apply to the inclusion of executable code in the main body of the program. It would seem to be by far the simplest if Ada programs to run in MCS were restricted in the following ways. First, the main body may only consist of the declaration of tasks. Second, no task may create a task to run on another machine. It seems likely that these restrictions would be acceptable in practice. 10) Conclusions We have described some possible ways in which an Ada rendezvous could be implemented in an MCS. We have also considered the overheads of using the rendezvous where we wish to perform 1:N, rather than 1:1, communication. We have shown that the overheads of using the Ada rendezvous as opposed to a message based communication system are quite large. This is, perhaps, not surprising as procedure calls are fundamental within a single computer, but message passing, rather than remote procedure calls, are fundamental to communication systems. 
We have briefly considered the problem of loading and executing Ada programs, and we have suggested a rule for constructing and mapping Ada programs which would simplify their implementation on an MCS. We have not covered all the important issues to do with implementing Ada on an MCS. For example we have ignored problems of deciding on a good software to hardware mapping; how we test and monitor program execution; how we extend or replace parts of the running program etc. This omission can be justified by saying that the other problems are pertinent to other programming languages, not just Ada. What we have tried to do is to concentrate on those problems which seem to be peculiar to Ada.
{"Source-Url": "https://apps.dtic.mil/sti/pdfs/ADA114604.pdf", "len_cl100k_base": 4430, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 26108, "total-output-tokens": 4674, "length": "2e12", "weborganizer": {"__label__adult": 0.00023257732391357425, "__label__art_design": 0.00019729137420654297, "__label__crime_law": 0.0002624988555908203, "__label__education_jobs": 0.000255584716796875, "__label__entertainment": 5.0008296966552734e-05, "__label__fashion_beauty": 9.578466415405272e-05, "__label__finance_business": 0.00018584728240966797, "__label__food_dining": 0.00026345252990722656, "__label__games": 0.0003910064697265625, "__label__hardware": 0.0024509429931640625, "__label__health": 0.0003266334533691406, "__label__history": 0.0001691579818725586, "__label__home_hobbies": 6.42538070678711e-05, "__label__industrial": 0.0004274845123291016, "__label__literature": 0.0001500844955444336, "__label__politics": 0.00017309188842773438, "__label__religion": 0.0003476142883300781, "__label__science_tech": 0.0311126708984375, "__label__social_life": 4.595518112182617e-05, "__label__software": 0.0165557861328125, "__label__software_dev": 0.9453125, "__label__sports_fitness": 0.00019276142120361328, "__label__transportation": 0.0003786087036132813, "__label__travel": 0.00015175342559814453}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 21604, 0.00853]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 21604, 0.48219]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 21604, 0.93848]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 0, null], [0, 0, null], [0, 763, false], [763, 3920, null], [3920, 5827, null], [5827, 8385, null], [8385, 11722, null], [11722, 15054, null], [15054, 18446, null], [18446, 21604, null], [21604, 21604, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 0, null], [0, 0, null], [0, 763, true], [763, 3920, null], [3920, 5827, null], [5827, 8385, null], [8385, 11722, null], [11722, 15054, null], [15054, 18446, null], [18446, 21604, null], [21604, 21604, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 21604, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 21604, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21604, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21604, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 21604, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21604, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21604, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21604, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21604, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 21604, null]], "pdf_page_numbers": [[0, 0, 1], [0, 0, 2], [0, 0, 3], [0, 763, 4], [763, 3920, 5], [3920, 5827, 6], [5827, 8385, 7], [8385, 11722, 8], [11722, 15054, 9], [15054, 18446, 10], [18446, 21604, 11], [21604, 21604, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21604, 0.19847]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
99e5dd1d75b1c2d157fd02f617033320a0970541
A Logic Programming Based Framework for Security Protocol Verification Shijing Wang and Yan Zhang Intelligent Systems Laboratory School of Computing and Mathematics University of Western Sydney Penrith South DC, NSW 1797, Australia E-mail: {shwang, yan}@scm.uws.edu.au Abstract. Security protocol analysis has been a major research topic in information security and recognised to be a notoriously hard problem. In this paper, we take the advantage of answer set programming technology to develop an effective framework to verify security protocols carrying claimed security proof under adversary models on computational complexity theory. In our approach, a security protocol, adversary actions and attacks can be formally specified within a unified logic program. Then the verification is performed in an automatic manner by computing the stable models of the underlying logic program. We use Boyd-González Nieto conference key agreement protocol as our case study protocol to demonstrate the effectiveness and efficiency of our approach. 1 Introduction In recent years, security protocols are increasingly being used in many diverse secure electronic communications and electronic commerce applications. However, despite an enormous amount of research effort expended in design and analysis of such protocols, it is still notoriously hard. When security protocols are designed by hand, errors may creep in by combining protocols actions in ways not foreseen by the designer [3]. Some protocols have been found errors after they were published many years, even since they have been proven secure [10]. The study of cryptographic protocols has led to the dichotomization of cryptographic protocol analysis techniques between the formal methods approach and the computational complexity approach [8]. The formal methods approach is to use logic based methods including model checking and theorem proving to automatically verify a protocol. The computational complexity approach, on the other hand, adopts a reductive process which allows a proven reduction from the problem of breaking the protocol to another problem believed to be hard. These two approaches have been developed in two mostly different communities. Recently, some research works have been done to bridge the gap between them, which achieve automatic provability under classical computational models, see [2, 4] for example. In this paper, based on the answer set programming approach [5], we propose a framework to analyze security protocols that are found insecure against certain types of attacks. We use Boyd-González Nieto conference key agreement protocol as our case study protocol to demonstrate our approach. 2 Logic Programming Specification for Security Protocols Modelling security protocols Now we present how to model a security protocol through the case study protocol (refer to the appendix A for the complete specification program). For specification simplicity and efficiency, we simplify the case study protocol to a two-party protocol showed in Figure 2 as explained in [9]. Because in a protocol flow of Figure 1, message 1 and 2 can be sent concurrently, in the simplified protocol, we merge them into one message. \[ \begin{align*} 1. \; U_1 \rightarrow U_2 : & \; U = \{U_1, U_2\}, S_{\delta_{U_1}}(U, \{N_1\}_{e_{U_1}}), \{N_1\}_{e_{U_2}} \\ 2. \; U_2 \rightarrow U_1 : & \; U_2, N_2 \\ & \; SK_{U_1} = H(N_1 || N_2) = SK_{U_2} \end{align*} \] **Fig. 1.** Simplified Boyd-González Nieto Conference Key Agreement Protocol Let \( U = \{U_1, U_2\} \). 
The initiator, \( U_1 \) encrypts \( N_1 \) using the public key of \( U_2 \), signs \( U \) and the encrypted nonce \( \{N_1\}_{e_{U_2}} \), and broadcasts \( U \), the signature value and the encrypted nonce in message flow 1. The principal, \( U_2 \), upon receiving the initiate message, will respond with his/her identity and a random nonce in message flow 2. The first part of protocol specification is to set up principals and their keys through predicates, \( \text{player}(A) \), \( \text{agent}(A) \), \( \text{ag}.\text{id}(A,N) \), and \( \text{key}(K) \), where \( K \) is one of key functions. For instance, in our case study protocol, we have \[ \begin{align*} \text{player}(u_1), \text{player}(u_2), & \; \text{adversary}(a) \\ \text{ag}.\text{id}(u_1,0), \text{ag}.\text{id}(u_2,1), \text{ag}.\text{id}(a,2) \\ \text{key}(p\text{Key}(A)) & \; \leftarrow \text{agent}(A). \\ \text{key}(s\text{Key}(A)) & \; \leftarrow \text{agent}(A). \\ \text{key}(\text{sig}.s\text{Key}(A)) & \; \leftarrow \text{agent}(A). \\ \text{key}(\text{sig}.v\text{Key}(A)) & \; \leftarrow \text{agent}(A). \end{align*} \] The second part is to model relationships between keys of principals. In the case study protocol, there are encryption and signature keys which are specified as follows. \[ \begin{align*} \text{asym\text{KeyPair}}(p\text{Key}(A), s\text{Key}(A)) & \; \leftarrow \text{agent}(A). \\ \text{asym\text{KeyPair}}(\text{sig}.s\text{Key}(A), \text{sig}.v\text{Key}(A)) & \; \leftarrow \text{agent}(A). \\ \text{asym\text{KeyPair}}(K_1, K_2) & \; \leftarrow \text{asym\text{KeyPair}}(K_2, K_1). \end{align*} \] The third part is about message flows in a protocol. During a protocol run, we assume that if a principal \( A \) sends a message to \( B \) and the adversary does not intercept it, \( B \) will receive it at the next time. We model the assumption using the rule: \[ \begin{align*} gets(B, M, P, T + 1) & \; \leftarrow \text{sends}(A, B, M, P, T), \\ \text{neg}(A, B, \text{not intercept}(a, M, P, T + 1)). \end{align*} \] A protocol consists of a sequence of messages. Except the first message which is sent by the initiator of the protocol run, principals will check preconditions before they send a response message. As explained in [3], a protocol was denoted like: \[ A \rightarrow B_{i} : m_{i}, p_{i} \quad \% \text{first message } A \text{ must send}, \ldots \\ B_{i} \rightarrow A : m_{i}, p_{i} \quad \% \text{first message } A \text{ must receive}, \ldots \\ A \rightarrow B_{i} : m_{i}, p_{i} \quad \% \text{last message } A \text{ must send before } m, \ldots \\ B_{i} \rightarrow A : m_{i}, p_{i} \quad \% \text{last message } A \text{ must receive before } m, \ldots \\ A \rightarrow B : m, p \] As showed above, principal A will sends message \((m, p)\) to B, if we check that a sequence of messages should have been received and sent before \((m, p)\) in a correct run. We code the following rule: \[ \text{sends}(A, B, m, p, T + 1) \leftarrow \\ \text{sends}(A, B_{1}, m_{i}, p_{i}, T_{i}), \ldots, \text{sends}(A, B_{i}, m_{i}, p_{i}, T_{i}), \\ \text{gets}(A, m_{i}, p_{i}, T_{i}), \ldots, \text{gets}(A, m_{i}, p_{i}, T_{i}), T_{i} > T_{i}, \ldots, T_{i} > \\ \text{protocol-dependant literals} \\ \text{contains}(m, p, \text{msg}(.)). \] We consider preconditions for sending message \(m\) by principal \(A\) as actions that \(A\) has performed in previous steps according to the protocol run. 
Protocol dependent literals are usually to check the freshness of random nonces or timestamps and other conditions needed by particular protocols. Because we represent a message using a message id and type in predicate \text{sends}, we should add a fact rule, in which \text{contains} is the head to denote what the message is indeed. For instance, in our case study protocol, principal \(u_{1}\) sends an initial message to start a protocol run. We model it as the following rule. \[ \text{sends}(u_{1}, \text{all}, 0, 0, 0), \\ \text{contains}(0, 0, \text{agset}(u_{1}, u_{2})), \\ \text{contains}(0, 0, \text{sign}(\text{sig}_{\text{K}1}(u_{1}), \text{agset}(u_{1}, u_{2})||\text{enc}(\text{pK}(u_{2}), n(0))))), \\ \text{contains}(0, 0, \text{enc}(\text{pK}(u_{2}), n(0))). \] Finally, we model the principal knowledge including the principal initial knowledge base and knowledge change during the protocol run. Each principal taking part in the protocol run has an initial knowledge base such as other principals’ public keys. While sending and receiving messages, principals will hold them and derive more information by breaking or decrypting all messages for which they have a key. Their knowledge will change during the protocol run. We use predicate \text{holds} to specify principals’ knowledge. For encryption and signature keys in the case study protocol, we code initial knowledge bases for principals using following rules. \[ \text{holds}(A, \text{pK}(B), 0) \leftarrow \text{agent}(A), \text{agent}(B), \\ \text{holds}(A, \text{sig}_{\text{K}}(B), 0) \leftarrow \text{agent}(A), \text{agent}(B), \\ \text{holds}(A, \text{sk}(A), 0) \leftarrow \text{agent}(A), \\ \text{holds}(A, \text{sig}_{\text{sk}}(A), 0) \leftarrow \text{agent}(A). \] Then we write following rules to model principals’ knowledge change during the protocol run. \[ \begin{align*} holds(A, M, T) & \leftarrow gets(A, M, P, T) \\ holds(A, M, T) & \leftarrow sends(A, B, M, P, T), \\ holds(A, S, T) & \leftarrow holds(A, M, T), contains(M, P, S) \\ holds(A, S_1, T) & \leftarrow holds(A, M, T), contains(M, P, S_1 || \ldots || S_n). \end{align*} \] \[ \ldots holds(A, S_n, T) \leftarrow holds(A, M, T), contains(M, P, S_1 || \ldots || S_n). holds(A, S_1, T) \leftarrow holds(A, enc(K_1, S_1 || \ldots || S_n), T). holds(A, K_2, T_1), asymKeyPair(K_1, K_2). \] \[ \ldots holds(A, S_n, T) \leftarrow holds(A, enc(K_1, S_1 || \ldots || S_n), T), holds(A, K_2, T_1), asymKeyPair(K_1, K_2). \] **Modelling attacks** In our framework, the adversary model is closely based on Bellare-Rogaway model. If protocols with claimed security under Bellare-Rogaway model are found to be violating any of the conditions in the definition of *insecurity*, they will be insecure in Bellare-Rogaway model. Moreover, the proof of the protocol will also be invalid. Based on the definition of *insecurity*, we should model *SID*s and session keys of principals. The *SID* of a principal is the concatenation of all messages he receives and sends. We use predicate *inSidList(U, M)* to record the messages that the principal *U* receives and sends. \[ \begin{align*} inSidList(U, M) & \leftarrow sends(U, all, M, P, T). \\ inSidList(U, M) & \leftarrow gets(U, M, P, T). 
\end{align*} \] The following two rules specify that two principals have the same *SID*s: the first denotes that if a message is in the session id list of principal *U_1* but not in the session id list of principal *U_2*, then *sid_neq_pair(U_1, U_2)* is true; the second specifies the conditions which must be satisfied for two principals to have the same *SID*s. \[ \begin{align*} sid_neq_pair(U_1, U_2) & \leftarrow \quad inSidList(U_1, M),\ not\ inSidList(U_2, M),\ neq(U_1, U_2). \\ same_sid_pair(U_1, U_2) & \leftarrow \quad not\ sid_neq_pair(U_1, U_2),\ not\ sid_neq_pair(U_2, U_1),\ neq(U_1, U_2). \end{align*} \] In our case study protocol, the session key of a principal is a one-way hash function of the concatenation of the random nonces of all principals taking part in the conference protocol. \[ \begin{align*} sk(A, h(n(M_1), n(M_2))) & \leftarrow holds(A, agset(B, C), T), \\ & \quad holds(A, nonce(B, n(M_1)), T_1),\ holds(A, nonce(C, n(M_2)), T_2). \end{align*} \] The following rule models that principals *U_1* and *U_2* have the same session key. \[ \begin{align*} same_sk_pair(U_1, U_2) & \leftarrow \quad sk(U_1, h(n(M_1), n(M_2))),\ sk(U_2, h(n(M_1), n(M_2))),\ neq(U_1, U_2). \end{align*} \] Consider condition 1 in the insecurity definition as an instance: if two non-partner oracles have the same session key, the protocol is insecure. Here two oracles are not partners if they have different SIDs. The attack is modelled as follows: \[ \text{attack} \leftarrow \text{same_sk_pair}(U_1, U_2), \text{not same_sid_pair}(U_1, U_2). \] Note that \text{same_sk_pair}(U_1, U_2) denotes that principals \(U_1\) and \(U_2\) have the same session key and \text{same_sid_pair}(U_1, U_2) denotes that principals \(U_1\) and \(U_2\) have the same SIDs.

3 Model Checking and Verification

After specifying security protocols, adversary actions, and attacks using the language \(L_{sp}\), we merge the three parts into a logic program \(P\) to which we add a constraint rule, \[ \leftarrow \text{not attack}. \] We use the Smodels system [11] to verify security protocols as follows: (1) through lparse, we obtain a finite ground logic program \(P^g\) from program \(P\); (2) using smodels, we compute the stable models of the ground program \(P^g\); (3) if no stable model exists, the attack does not exist for protocol runs up to time \(t_{max}\)^1; (4) if there is a stable model, we collect the atoms representing the actions \text{sends}, \text{gets} and \text{intercept} that are true in the model, from which we can find the sequence of actions that constitutes an attack trace.

1. At time \(t_0\), initiator \(u_1\) broadcasts an initial message which has three parts: the set of principals in the protocol run, \text{agset}(u_1, u_2); the signature over the principal set and the random nonce \(n(0)\) encrypted under the public key of principal \(u_2\), \(\text{sign}(\text{sig.sKey}(u_1), \text{agset}(u_1, u_2)||\text{enc}(\text{pKey}(u_2), n(0)))\); and the random nonce \(n(0)\) encrypted under the public key of \(u_2\), \(\text{enc}(\text{pKey}(u_2), n(0))\).
2. At time \(t_1\), \(A\) receives the message and intercepts it. After modifying the principal set to \(\text{agset}(a, u_2)\) and making a new signature using his own signature key, \(\text{sign}(\text{sig.sKey}(a), \text{agset}(a, u_2)||\text{enc}(\text{pKey}(u_2), n(0)))\), \(A\) fabricates a new message and sends it to principal \(u_2\). Now \(A\) acts as an initiator and starts a different session.
3. At time \(t_2\), principal \(u_2\) receives the message from \(A\) and believes that \(A\) initiates a protocol run.
4.
At time \(t_3\), principal \(u_2\) broadcasts his identifier and random number. 5. At time \(t_4\), principal \(u_1\) and \(A\) receive the random nonce of principal \(u_2\). \(u_1\) believes he finishes his own session with \(u_2\), however \(u_2\) believes he is in a different session with \(A\). An attack was found in 5.460 seconds. We observe that principal \(u_1\’s SID\) is \((0,8)\) and principal \(u_2\’s SID\) is \((1,8)\). Then \(u_1\) and \(u_2\) are not partners since they do not have matching SIDs. \(u_1\) believes the session key \(SK_{u_1} = h(n(0)||n(8))\) is being shared with \(u_2\), but \(u_2\) believes the session key \(SK_{u_2} = h(n(0)||n(8)) = SK_{u_1}\) is being shared with \(A\). Although \(A\) does not know the session key as \(A\) does not know the value of \(n(0)\), he is able to send query \text{Reveal} to the session with \(u_2\) and get \(SK_{u_2} = h(n(0)||n(8))\) which is same as \(SK_{u_1}\). Our case study protocol is not secure under Bellare-Rogaway model as being claimed. ^1 \(t_{max}\) is a max time limitation set up in the logic program. 4 Conclusions In this paper, we developed a logic programming framework in which we not only use formal verification under adversary models in the computational complexity theory, but also integrate protocol analysis into the approach. As logic programming is a declarative executable approach for knowledge representation and reasoning, in our framework, we defined a security protocol specification language $L_{sp}$ under logic programming with stable model semantics which is used to specify security protocols carrying claimed security proof under adversary models. Using Smodels we are able to verify the program we have modelled. As a case study, Boyd-González Nieto conference key agreement protocol has been specified, verified using our framework. References
{"Source-Url": "http://staff.scem.uws.edu.au/~yan/papers/current/ismis08.pdf", "len_cl100k_base": 4586, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 13718, "total-output-tokens": 5826, "length": "2e12", "weborganizer": {"__label__adult": 0.00051116943359375, "__label__art_design": 0.0004127025604248047, "__label__crime_law": 0.002132415771484375, "__label__education_jobs": 0.0011615753173828125, "__label__entertainment": 0.00011998414993286131, "__label__fashion_beauty": 0.00021660327911376953, "__label__finance_business": 0.0006585121154785156, "__label__food_dining": 0.0005598068237304688, "__label__games": 0.0009026527404785156, "__label__hardware": 0.0020771026611328125, "__label__health": 0.0014247894287109375, "__label__history": 0.0003867149353027344, "__label__home_hobbies": 0.0001938343048095703, "__label__industrial": 0.001068115234375, "__label__literature": 0.0003876686096191406, "__label__politics": 0.0006003379821777344, "__label__religion": 0.000621795654296875, "__label__science_tech": 0.39990234375, "__label__social_life": 0.00015783309936523438, "__label__software": 0.01226043701171875, "__label__software_dev": 0.57275390625, "__label__sports_fitness": 0.0004439353942871094, "__label__transportation": 0.0009303092956542968, "__label__travel": 0.00022161006927490232}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 17853, 0.02043]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 17853, 0.27922]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 17853, 0.79111]], "google_gemma-3-12b-it_contains_pii": [[0, 2691, false], [2691, 5599, null], [5599, 8758, null], [8758, 11449, null], [11449, 14889, null], [14889, 17853, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2691, true], [2691, 5599, null], [5599, 8758, null], [8758, 11449, null], [11449, 14889, null], [14889, 17853, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 17853, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 17853, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 17853, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 17853, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 17853, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 17853, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 17853, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 17853, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 17853, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 17853, null]], "pdf_page_numbers": [[0, 2691, 1], [2691, 5599, 2], [5599, 8758, 3], [8758, 11449, 4], [11449, 14889, 5], [14889, 17853, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 17853, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
a69cf43a49d11f3309c65055408192d2c5cdf99d
<table> <thead> <tr> <th>Chapter</th> <th>Title</th> <th>Page</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Introduction</td> <td>1</td> </tr> <tr> <td>1.1</td> <td>Documentation Roadmap</td> <td>1</td> </tr> <tr> <td>2</td> <td>Installing DPDK from the Ports Collection</td> <td>3</td> </tr> <tr> <td>2.1</td> <td>Installing the DPDK FreeBSD Port</td> <td>3</td> </tr> <tr> <td>2.2</td> <td>Compiling and Running the Example Applications</td> <td>3</td> </tr> <tr> <td>3</td> <td>Compiling the DPDK Target from Source</td> <td>6</td> </tr> <tr> <td>3.1</td> <td>System Requirements</td> <td>6</td> </tr> <tr> <td>3.2</td> <td>Install the DPDK and Browse Sources</td> <td>7</td> </tr> <tr> <td>3.3</td> <td>Installation of the DPDK Target Environments</td> <td>7</td> </tr> <tr> <td>3.4</td> <td>Browsing the Installed DPDK Environment Target</td> <td>8</td> </tr> <tr> <td>3.5</td> <td>Loading the DPDK contigmem Module</td> <td>8</td> </tr> <tr> <td>3.6</td> <td>Loading the DPDK nic_uio Module</td> <td>9</td> </tr> <tr> <td>4</td> <td>Compiling and Running Sample Applications</td> <td>11</td> </tr> <tr> <td>4.1</td> <td>Compiling a Sample Application</td> <td>11</td> </tr> <tr> <td>4.2</td> <td>Running a Sample Application</td> <td>12</td> </tr> <tr> <td>4.3</td> <td>Running DPDK Applications Without Root Privileges</td> <td>13</td> </tr> <tr> <td>5</td> <td>EAL parameters</td> <td>14</td> </tr> <tr> <td>5.1</td> <td>Common EAL parameters</td> <td>14</td> </tr> <tr> <td>5.2</td> <td>FreeBSD-specific EAL parameters</td> <td>16</td> </tr> </tbody> </table> CHAPTER ONE INTRODUCTION This document contains instructions for installing and configuring the Data Plane Development Kit (DPDK) software. It is designed to get customers up and running quickly and describes how to compile and run a DPDK application in a FreeBSD application (freebsd) environment, without going deeply into detail. For a comprehensive guide to installing and using FreeBSD, the following handbook is available from the FreeBSD Documentation Project: FreeBSD Handbook. Note: The DPDK is now available as part of the FreeBSD ports collection. Installing via the ports collection infrastructure is now the recommended way to install the DPDK on FreeBSD, and is documented in the next chapter, Installing DPDK from the Ports Collection. 1.1 Documentation Roadmap The following is a list of DPDK documents in the suggested reading order: - **Release Notes**: Provides release-specific information, including supported features, limitations, fixed issues, known issues and so on. Also, provides the answers to frequently asked questions in FAQ format. - **Getting Started Guide** (this document): Describes how to install and configure the DPDK; designed to get users up and running quickly with the software. - **Programmer’s Guide**: Describes: - The software architecture and how to use it (through examples), specifically in a Linux* application (linux) environment - The content of the DPDK, the build system (including the commands that can be used in the root DPDK Makefile to build the development kit and an application) and guidelines for porting an application - Optimizations used in the software and those that should be considered for new development A glossary of terms is also provided. - **API Reference**: Provides detailed information about DPDK functions, data structures and other programming constructs. • **Sample Applications User Guide**: Describes a set of sample applications. 
Each chapter describes a sample application that showcases specific functionality and provides instructions on how to compile, run and use the sample application. 2.1 Installing the DPDK FreeBSD Port On a system with the ports collection installed in `/usr/ports`, the DPDK can be installed using the commands: ``` cd /usr/ports/net/dpdk make install ``` After the installation of the DPDK port, instructions will be printed on how to install the kernel modules required to use the DPDK. A more complete version of these instructions can be found in the sections `Loading the DPDK contigmem Module` and `Loading the DPDK nic_uio Module`. Normally, lines like those below would be added to the file `/boot/loader.conf`. ``` # Reserve 2 x 1G blocks of contiguous memory using contigmem driver: hw.contigmem.num_buffers=2 hw.contigmem.buffer_size=1073741824 contigmem_load="YES" # Identify NIC devices for DPDK apps to use and load nic_uio driver: hw.nic_uio.bdfs="2:0:0,2:0:1" nic_uio_load="YES" ``` 2.2 Compiling and Running the Example Applications When the DPDK has been installed from the ports collection it installs its example applications in `/usr/local/share/dpdk/examples` - also accessible via symlink as `/usr/local/share/examples/dpdk`. These examples can be compiled and run as described in `Compiling and Running Sample Applications`. In this case, the required environmental variables should be set as below: - `RTE_SDK=/usr/local/share/dpdk` - `RTE_TARGET=x86_64-native-freebsd-clang` An example application can therefore be copied to a user’s home directory and compiled and run as below: ``` export RTE_SDK=/usr/local/share/dpdk export RTE_TARGET=x86_64-native-freebsd-clang cp -r /usr/local/share/dpdk/examples/helloworld . cd helloworld/ gmake CC main.o LD helloworld INSTALL-APP helloworld INSTALL-MAP helloworld.map sudo ./build/helloworld -l 0-3 -n 2 ``` Hello from core 1 Hello from core 2 Hello from core 3 hello from core 0 **Note:** To run a DPDK process as a non-root user, adjust the permissions on the /dev/contigmem and /dev/uio device nodes as described in section *Running DPDK Applications Without Root Privileges*. **Note:** For an explanation of the command-line parameters that can be passed to an DPDK application, see section *Running a Sample Application.* 3.1 System Requirements The DPDK and its applications require the GNU make system (gmake) to build on FreeBSD. Optionally, gcc may also be used in place of clang to build the DPDK, in which case it too must be installed prior to compiling the DPDK. The installation of these tools is covered in this section. Compiling the DPDK requires the FreeBSD kernel sources, which should be included during the installation of FreeBSD on the development platform. The DPDK also requires the use of FreeBSD ports to compile and function. To use the FreeBSD ports system, it is required to update and extract the FreeBSD ports tree by issuing the following commands: ``` portsnap fetch portsnap extract ``` If the environment requires proxies for external communication, these can be set using: ``` setenv http_proxy <my_proxy_host>:<port> setenv ftp_proxy <my_proxy_host>:<port> ``` The FreeBSD ports below need to be installed prior to building the DPDK. 
In general these can be installed using the following set of commands: ``` cd /usr/ports/<port_location> make config-recursive make install make clean ``` Each port location can be found using: ``` whereis <port_name> ``` The ports required and their locations are as follows: - **dialog4ports**: /usr/ports/ports-mgmt/dialog4ports - **GNU make(gmake)**: /usr/ports/devel/gmake - **coreutils**: /usr/ports/sysutils/coreutils For compiling and using the DPDK with gcc, the compiler must be installed from the ports collection: • gcc: version 4.9 is recommended /usr/ports/lang/gcc49. Ensure that CPU_OPTS is selected (default is OFF). When running the make config-recursive command, a dialog may be presented to the user. For the installation of the DPDK, the default options were used. **Note:** To avoid multiple dialogs being presented to the user during make install, it is advisable before running the make install command to re-run the make config-recursive command until no more dialogs are seen. ### 3.2 Install the DPDK and Browse Sources First, uncompress the archive and move to the DPDK source directory: ``` unzip DPDK-<version>.zip cd DPDK-<version> ``` The DPDK is composed of several directories: - **lib**: Source code of DPDK libraries - **app**: Source code of DPDK applications (automatic tests) - **examples**: Source code of DPDK applications - **config, buildtools, mk**: Framework-related makefiles, scripts and configuration ### 3.3 Installation of the DPDK Target Environments The format of a DPDK target is: ``` ARCH--MACHINE--EXECENV--TOOLCHAIN ``` Where: - **ARCH** is: x86_64 - **MACHINE** is: native - **EXECENV** is: freebsd - **TOOLCHAIN** is: gcc | clang The configuration files for the DPDK targets can be found in the DPDK/config directory in the form of: ``` defconfig_ARCH--MACHINE--EXECENV--TOOLCHAIN ``` **Note:** Configuration files are provided with the RTE_MACHINE optimization level set. Within the configuration files, the RTE_MACHINE configuration value is set to native, which means that the compiled software is tuned for the platform on which it is built. For more information on this setting, and its possible values, see the DPDK Programmers Guide. To make the target, use `gmake install T=<target>`. For example to compile for FreeBSD use: gmake install T=x86_64-native-freebsd-clang **Note:** If the compiler binary to be used does not correspond to that given in the TOOLCHAIN part of the target, the compiler command may need to be explicitly specified. For example, if compiling for gcc, where the gcc binary is called gcc4.9, the command would need to be gmake install T=<target> CC=gcc4.9. ### 3.4 Browsing the Installed DPDK Environment Target Once a target is created, it contains all the libraries and header files for the DPDK environment that are required to build customer applications. In addition, the test and testpmd applications are built under the build/app directory, which may be used for testing. A kmod directory is also present that contains the kernel modules to install. ### 3.5 Loading the DPDK contigmem Module To run a DPDK application, physically contiguous memory is required. In the absence of non-transparent superpages, the included sources for the contigmem kernel module provides the ability to present contiguous blocks of memory for the DPDK to use. The contigmem module must be loaded into the running kernel before any DPDK is run. The module is found in the kmod sub-directory of the DPDK target directory. 
The amount of physically contiguous memory along with the number of physically contiguous blocks to be reserved by the module can be set at runtime prior to module loading using:
```
kenv hw.contigmem.num_buffers=n
kenv hw.contigmem.buffer_size=m
```
The kernel environment variables can also be specified during boot by placing the following in `/boot/loader.conf`:
```
hw.contigmem.num_buffers=n
hw.contigmem.buffer_size=m
```
The variables can be inspected using the following command:
```
sysctl -a hw.contigmem
```
Where `n` is the number of blocks and `m` is the size in bytes of each area of contiguous memory. A default of two buffers of size 1073741824 bytes (1 Gigabyte) each is set during module load if they are not specified in the environment. The module can then be loaded using kldload (assuming that the current directory is the DPDK target directory):
```
kldload ./kmod/contigmem.ko
```
It is advisable to include the loading of the contigmem module during the boot process to avoid issues with potential memory fragmentation during later system uptime. This can be achieved by copying the module to the `/boot/kernel/` directory and placing the following into `/boot/loader.conf`:
```
contigmem_load="YES"
```
**Note:** The contigmem_load directive should be placed after any definitions of hw.contigmem.num_buffers and hw.contigmem.buffer_size if the default values are not to be used. An error such as:
```bash
kldload: can't load ./x86_64-native-freebsd-gcc/kmod/contigmem.ko: Exec format error
```
is generally attributed to not having enough contiguous memory available and can be verified via dmesg or /var/log/messages:
```bash
kernel: contigmalloc failed for buffer <n>
```
To avoid this error, reduce the number of buffers or the buffer size.
### 3.6 Loading the DPDK nic_uio Module
After loading the contigmem module, the nic_uio module must also be loaded into the running kernel prior to running any DPDK application. This module must be loaded using the kldload command as shown below (assuming that the current directory is the DPDK target directory).
```bash
kldload ./kmod/nic_uio.ko
```
**Note:** If the ports to be used are currently bound to an existing kernel driver then the `hw.nic_uio.bdfs` sysctl value will need to be set before loading the module. Setting this value is described in the next section below.
Currently loaded modules can be seen by using the kldstat command, and a module can be removed from the running kernel by using kldunload `<module_name>`. To load the module during boot, copy the nic_uio module to `/boot/kernel` and place the following into `/boot/loader.conf`:
```bash
nic_uio_load="YES"
```
**Note:** `nic_uio_load="YES"` must appear after the contigmem_load directive, if it exists.
By default, the nic_uio module will take ownership of network ports if they are recognized DPDK devices and are not owned by another module. However, since the FreeBSD kernel includes support, either built-in, or via a separate driver module, for most network card devices, it is likely that the ports to be used are already bound to a driver other than nic_uio. The following sub-sections describe how to query and modify the device ownership of the ports to be used by DPDK applications.
#### 3.6.1 Binding Network Ports to the nic_uio Module
Device ownership can be viewed using the pciconf -l command. The example below shows four Intel® 82599 network ports under if_ixgbe module ownership.
```bash
pciconf -l
ix0@pci1:1:0:0: class=0x020000 card=0x00038086 chip=0x10fb8086 rev=0x01 hdr=0x00
ix1@pci1:1:0:1: class=0x020000 card=0x00038086 chip=0x10fb8086 rev=0x01 hdr=0x00
ix2@pci1:2:0:0: class=0x020000 card=0x00038086 chip=0x10fb8086 rev=0x01 hdr=0x00
ix3@pci1:2:0:1: class=0x020000 card=0x00038086 chip=0x10fb8086 rev=0x01 hdr=0x00
```
The first column constitutes three components:
1. Device name: ixN
2. Unit name: pci0
3. Selector (Bus:Device:Function): for example, 1:0:0
Where no driver is associated with a device, the device name will be none. By default, the FreeBSD kernel will include built-in drivers for the most common devices; a kernel rebuild would normally be required to either remove the drivers or configure them as loadable modules. To avoid building a custom kernel, the nic_uio module can detach a network port from its current device driver. This is achieved by setting the hw.nic_uio.bdfs kernel environment variable prior to loading nic_uio, as follows:
```
hw.nic_uio.bdfs="b:d:f,b:d:f,..."
```
Where a comma separated list of selectors is set, the list must not contain any whitespace. For example to re-bind ix2@pci0:2:0:0 and ix3@pci0:2:0:1 to the nic_uio module upon loading, use the following command:
```
kenv hw.nic_uio.bdfs="2:0:0,2:0:1"
```
The variable can also be specified during boot by placing the following into /boot/loader.conf, before the previously-described nic_uio_load line - as shown:
```
hw.nic_uio.bdfs="2:0:0,2:0:1"
nic_uio_load="YES"
```
### 3.6.2 Binding Network Ports Back to their Original Kernel Driver
If the original driver for a network port has been compiled into the kernel, it is necessary to reboot FreeBSD to restore the original device binding. Before doing so, update or remove the hw.nic_uio.bdfs in /boot/loader.conf. If rebinding to a driver that is a loadable module, the network port binding can be reset without rebooting. To do so, unload both the target kernel module and the nic_uio module, modify or clear the hw.nic_uio.bdfs kernel environment (kenv) value, and reload the two drivers - first the original kernel driver, and then the nic_uio driver. Note: the latter does not need to be reloaded unless there are ports that are still to be bound to it. Example commands to perform these steps are shown below:
```
kldunload nic_uio
kldunload <original_driver>
# To clear the value completely:
kenv -u hw.nic_uio.bdfs
# To update the list of ports to bind:
kenv hw.nic_uio.bdfs="b:d:f,b:d:f,..."
kldload <original_driver>
kldload nic_uio  # optional
```
CHAPTER FOUR COMPILING AND RUNNING SAMPLE APPLICATIONS
The chapter describes how to compile and run applications in a DPDK environment. It also provides a pointer to where sample applications are stored.
### 4.1 Compiling a Sample Application
Once a DPDK target environment directory has been created (such as `x86_64-native-freebsd-clang`), it contains all libraries and header files required to build an application. When compiling an application in the FreeBSD environment on the DPDK, the following variables must be exported:
- **RTE_SDK** - Points to the DPDK installation directory.
- **RTE_TARGET** - Points to the DPDK target environment directory. For FreeBSD, this is the `x86_64-native-freebsd-clang` or `x86_64-native-freebsd-gcc` directory.
The following is an example of creating the `helloworld` application, which runs in the DPDK FreeBSD environment. While the example demonstrates compiling using gcc version 4.9, compiling with clang will be similar, except that the `CC=` parameter can probably be omitted.
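For comparison, a clang build of the same example might look as shown below. This is only a sketch: it assumes the `x86_64-native-freebsd-clang` target has already been installed as described in section 3.3, and the paths are illustrative.
```
setenv RTE_SDK $HOME/DPDK
setenv RTE_TARGET x86_64-native-freebsd-clang
cd $RTE_SDK/examples/helloworld
gmake
```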
The `helloworld` example may be found in the `${RTE_SDK}/examples` directory. The directory contains the `main.c` file. This file, when combined with the libraries in the DPDK target environment, calls the various functions to initialize the DPDK environment, then launches an entry point (dispatch application) for each core to be utilized. By default, the binary is generated in the build directory.
```bash
setenv RTE_SDK /home/user/DPDK
setenv RTE_TARGET x86_64-native-freebsd-gcc
cd $RTE_SDK/examples/helloworld/
gmake CC=gcc49
CC main.o
LD helloworld
INSTALL-APP helloworld
INSTALL-MAP helloworld.map
ls build/app
helloworld helloworld.map
```
**Note:** In the above example, `helloworld` was compiled inside the directory structure of the DPDK. However, it could have been located outside the directory structure to keep the DPDK directory structure intact. In the following case, the helloworld application is copied to a new directory as a new starting point.
```bash
setenv RTE_SDK /home/user/DPDK
setenv RTE_TARGET x86_64-native-freebsd-gcc
cp -r $RTE_SDK/examples/helloworld my_rte_app
cd my_rte_app/
gmake CC=gcc49
CC main.o
LD helloworld
INSTALL-APP helloworld
INSTALL-MAP helloworld.map
```
### 4.2 Running a Sample Application
1. The contigmem and nic_uio modules must be set up prior to running an application.
2. Any ports to be used by the application must be already bound to the nic_uio module, as described in section Binding Network Ports to the nic_uio Module, prior to running the application.
The application is linked with the DPDK target environment's Environment Abstraction Layer (EAL) library, which provides some options that are generic to every DPDK application. The following is the list of options that can be given to the EAL:
```bash
./rte-app -l CORELIST [-n NUM] [-b <domain:bus:devid.func>] [-r NUM] [-v] [--proc-type <primary|secondary|auto>]
```
**Note:** EAL has a common interface between all operating systems and is based on the Linux notation for PCI devices. For example, a FreeBSD device selector of pci0:2:0:1 is referred to as 02:00.1 in EAL.
The EAL options for FreeBSD are as follows:
- `-c COREMASK` or `-l CORELIST`: A hexadecimal bit mask of the cores to run on. Note that core numbering can change between platforms and should be determined beforehand. The corelist is a list of cores to use instead of a core mask.
- `-n NUM`: Number of memory channels per processor socket.
- `-b <domain:bus:devid.func>`: Blacklisting of ports; prevent EAL from using specified PCI device (multiple `-b` options are allowed).
- `--use-device`: Use the specified Ethernet device(s) only. Use comma-separated `[domain:]bus:devid.func` values. Cannot be used with the `-b` option.
- `-r NUM`: Number of memory ranks.
- `-v`: Display version information on startup.
- `--proc-type`: The type of process instance.
- `-m MB`: Memory to allocate from hugepages, regardless of processor socket.
Other options, which are specific to Linux and are not supported under FreeBSD, are as follows:
- `--socket-mem`: Memory to allocate from hugepages on specific sockets.
- `--huge-dir`: The directory where hugetlbfs is mounted.
- `--mbuf-pool-ops-name`: Pool ops name for mbuf to use.
- `--file-prefix`: The prefix text used for hugepage filenames.
The `-c` or `-l` option is mandatory; the others are optional.
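As a concrete illustration of the PCI notation note above, a hypothetical invocation that prevents EAL from using the port at FreeBSD selector pci0:2:0:1 (EAL notation 02:00.1) could look like the following; the core list and channel count are only examples:
```
sudo ./build/helloworld -l 0-1 -n 2 -b 02:00.1
```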
Copy the DPDK application binary to your target, then run the application as follows (assuming the platform has four memory channels, and that cores 0-3 are present and are to be used for running the application):
```
./helloworld -l 0-3 -n 4
```
**Note:** The `--proc-type` and `--file-prefix` EAL options are used for running multiple DPDK processes. See the "Multi-process Sample Application" chapter in the DPDK Sample Applications User Guide and the DPDK Programmers Guide for more details.
4.3 Running DPDK Applications Without Root Privileges
Although applications using the DPDK use network ports and other hardware resources directly, with a number of small permission adjustments, it is possible to run these applications as a user other than "root". To do so, the ownership, or permissions, on the following file system objects should be adjusted to ensure that the user account being used to run the DPDK application has access to them:
- The userspace-io device files in `/dev`, for example, `/dev/uio0`, `/dev/uio1`, and so on
- The userspace contiguous memory device: `/dev/contigmem`
**Note:** Please refer to the DPDK Release Notes for supported applications.
CHAPTER FIVE EAL PARAMETERS
This document contains a list of all EAL parameters. These parameters can be used by any DPDK application running on FreeBSD.
5.1 Common EAL parameters
The following EAL parameters are common to all platforms supported by DPDK.
5.1.1 Lcore-related options
- `-c <core mask>`: Set the hexadecimal bitmask of the cores to run on.
- `--lcores <core map>`: Map lcore set to physical cpu set. The argument format is `<lcores[@cpus]>[<,lcores[@cpus]>...]`.
- `-l <core list>`: List of cores to run on. The argument format is `<c1>[-c2][,c3[-c4],...]`, where c1, c2, etc. are core indexes between 0 and 128.
- `--master-lcore <core ID>`: Core ID that is used as master.
- `-s <service core mask>`: Hexadecimal bitmask of cores to be used as service cores.
Note: At a given instance only one core option `--lcores`, `-l` or `-c` can be used.
5.1.2 Device-related options
- `-b, --pci-blacklist <[domain:]bus:devid.func>`: Blacklist a PCI device to prevent EAL from using it. Multiple `-b` options are allowed.
**Note:** PCI blacklist cannot be used with the `-w` option.
- `-w, --pci-whitelist <[domain:]bus:devid.func>`: Add a PCI device to the white list.
**Note:** PCI whitelist cannot be used with the `-b` option.
- `--vdev <device arguments>`: Add a virtual device using the format `<driver><id>[,key=val, ...]`. For example: `--vdev 'net_pcap0,rx_pcap=input.pcap,tx_pcap=output.pcap'`
- `-d <path to shared object or directory>`: Load external drivers. An argument can be a single shared object file, or a directory containing multiple driver shared objects. Multiple `-d` options are allowed.
- `--no-pci`: Disable PCI bus.
5.1.3 Multiprocessing-related options
- `--proc-type <primary|secondary|auto>`: Set the type of the current process.
5.1.4 Memory-related options
- `-n <number of channels>`: Set the number of memory channels to use.
- `-r <number of ranks>`: Set the number of memory ranks (auto-detected by default).
- `-m <megabytes>`: Amount of memory to preallocate at startup.
- `--in-memory`: Do not create any shared data structures and run entirely in memory. Implies `--no-shconf` and (if applicable) `--huge-unlink`.
- `--iova-mode <pa|va>`: Force IOVA mode to a specific value.
5.1.5 Debugging options
- `--no-shconf`: No shared files created (implies no secondary process support).
- `--no-huge`: Use anonymous memory instead of hugepages (implies no secondary process support).
- `--log-level <type:val>`: Specify log level for a specific component. For example: `--log-level eal:8`. Can be specified multiple times.
5.1.6 Other options
- `-h, --help`: Display help message listing all EAL parameters.
- `-v`: Display the version information on startup.
- `--mbuf-pool-ops-name`: Pool ops name for mbuf to use.
5.2 FreeBSD-specific EAL parameters
There are currently no FreeBSD-specific EAL command-line parameters available.
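As a closing illustration of the common parameters above, the `--lcores` core-map format from section 5.1.1 can be made concrete with a hypothetical mapping; the lcore and CPU numbers here are chosen purely for illustration:
```
./helloworld --lcores '(0-1)@(0-1),2@2' -n 4
```
Here lcores 0 and 1 may float across physical CPUs 0-1, while lcore 2 is pinned to CPU 2.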
{"Source-Url": "http://fast.dpdk.org/doc/pdf-guides-19.05/freebsd_gsg-19.05.pdf", "len_cl100k_base": 6330, "olmocr-version": "0.1.53", "pdf-total-pages": 18, "total-fallback-pages": 0, "total-input-tokens": 33225, "total-output-tokens": 7118, "length": "2e12", "weborganizer": {"__label__adult": 0.00022482872009277344, "__label__art_design": 0.00019538402557373047, "__label__crime_law": 0.00013327598571777344, "__label__education_jobs": 0.0003418922424316406, "__label__entertainment": 3.993511199951172e-05, "__label__fashion_beauty": 9.435415267944336e-05, "__label__finance_business": 0.00014591217041015625, "__label__food_dining": 0.0001652240753173828, "__label__games": 0.0007643699645996094, "__label__hardware": 0.0030384063720703125, "__label__health": 0.00012409687042236328, "__label__history": 0.0001143813133239746, "__label__home_hobbies": 8.744001388549805e-05, "__label__industrial": 0.00035881996154785156, "__label__literature": 9.113550186157228e-05, "__label__politics": 9.459257125854492e-05, "__label__religion": 0.0002734661102294922, "__label__science_tech": 0.007579803466796875, "__label__social_life": 3.892183303833008e-05, "__label__software": 0.02911376953125, "__label__software_dev": 0.95654296875, "__label__sports_fitness": 0.00017654895782470703, "__label__transportation": 0.0002570152282714844, "__label__travel": 0.0001055002212524414}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24538, 0.0204]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24538, 0.46277]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24538, 0.80116]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 1349, false], [1349, 3206, null], [3206, 3447, null], [3447, 4792, null], [4792, 5241, null], [5241, 5608, null], [5608, 7092, null], [7092, 8874, null], [8874, 11410, null], [11410, 13916, null], [13916, 16157, null], [16157, 18016, null], [18016, 20272, null], [20272, 21699, null], [21699, 22545, null], [22545, 23828, null], [23828, 24538, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 1349, true], [1349, 3206, null], [3206, 3447, null], [3447, 4792, null], [4792, 5241, null], [5241, 5608, null], [5608, 7092, null], [7092, 8874, null], [8874, 11410, null], [11410, 13916, null], [13916, 16157, null], [16157, 18016, null], [18016, 20272, null], [20272, 21699, null], [21699, 22545, null], [22545, 23828, null], [23828, 24538, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 24538, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24538, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24538, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24538, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24538, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24538, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24538, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24538, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24538, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24538, null]], "pdf_page_numbers": [[0, 0, 1], [0, 1349, 2], [1349, 3206, 3], [3206, 3447, 4], [3447, 4792, 5], [4792, 5241, 6], [5241, 5608, 7], 
[5608, 7092, 8], [7092, 8874, 9], [8874, 11410, 10], [11410, 13916, 11], [13916, 16157, 12], [16157, 18016, 13], [18016, 20272, 14], [20272, 21699, 15], [21699, 22545, 16], [22545, 23828, 17], [23828, 24538, 18]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24538, 0.05722]]}
olmocr_science_pdfs
2024-12-11
2024-12-11
bfa1c633899612455bb4a6990437056634fe6d36
Design and Implementation of a Backward-In-Time Debugger

Christoph Hofer, Marcus Denker
Software Composition Group, University of Bern, Switzerland
www.iam.unibe.ch/~scg

Stéphane Ducasse
LISTIC, Université de Savoie, France
www.listic.univ-savoie.fr

Abstract: Traditional debugging and stepping through an execution are well-accepted techniques for understanding the deep internals of a program. However, in many cases navigating the stack trace is not enough to find bugs, since the cause of a bug is often no longer in the stack trace and old state is lost, and thus out of reach of the debugger. In this paper, we present the design and implementation of a backward-in-time debugger for a dynamic language, i.e., a debugger that allows one to navigate back through the history of the application. We present our debugger, called UNSTUCK, and show our solution to key implementation challenges.

1 Introduction

Debuggers offer the ability to stop a program at a chosen place, either due to an error or an explicit request (breakpoint). They provide the current states of the involved objects together with a stack trace. However, while stepping through the code is a powerful technique for gaining a deep understanding of a certain functionality [DDN02], in many cases this information is not enough to find bugs. The programmer is often forced to build new hypotheses about the possible cause of the bugs, set new breakpoints and restart the program to find the source of the problem. Often several iterations are necessary and it may be difficult to recreate the exact same context [LHS99]. The questions a programmer has are often: "where was this variable set?", "why is this object reference nil?" or "what was the previous state of that object?". A static debugger cannot answer these questions, since it only has access to the current execution stack. There is no possibility to backtrack the state of an object or to find out why this particular object was passed to a method. The Omniscient Debugger is a first attempt to address these problems [Lew03]; however, it is limited to Java and instrumentation is done at bytecode load time.

To understand the challenges faced when building a backward-in-time debugger, i.e., a debugger that allows one to query the state history of a program, we developed such a debugger in Squeak called UNSTUCK. For its implementation we collect rich information about the program execution in terms of events, which are used to recreate the state of objects at particular points in time. The contributions of this paper are:

- A model for a back-in-time debugger.
- A user interface to present and query the massive amount of data generated by the recording of all the objects' states.
- An implementation for Squeak Smalltalk.

The paper is organized as follows. Section 2 shows the problems of conventional debuggers. Section 3 then presents trace-based debugging. After a short overview of the implementation in Section 5, we present related work in Section 7. Finally, we conclude in Section 8 with an overview of future work.

2 Why Stack Trace is Not Enough

After an error has occurred, a standard debugger shows the current stack. The problem is that only methods which have not yet finished executing are on the stack; those that have finished execution are no longer available.

2.1 A Simple Example

The following example demonstrates the problem: there is a class Foo with two instance variables var1 and var2 and the following methods:
```smalltalk
Foo>>start
	self beforeBar.
	self bar.
	self moreBar.

Foo>>initialize
	var1 := 0.
	var2 := '.'.

Foo>>beforeBar
	var1 = 0 ifTrue: [var2 := nil].

Foo>>bar
	| tmp |
	tmp := 0.
	(var1 to: 10) do: [:each | tmp := tmp + each].
	self var1: tmp.

Foo>>moreBar
	var2 size > 0
```

Accessor methods are defined for `var1` and `var2`. `Foo new start` starts the program execution. The debugger comes up because of an error: `var2` is nil in method `moreBar` (see Figure 1, left).

Figure 1: Left: error in the Squeak debugger. Right: method calls and the resulting stack trace. Only the methods in the dashed box are in the stack trace when an error occurs in method `moreBar`.

In a normal debugger, we see a stack trace: only methods on the stack are shown; those methods which have been completely executed are not available anymore. Figure 1 (right) shows a complete execution trace of all methods executed. Only a small part of that (visualized with the dashed box) is part of the stack trace the debugger can show. When inspecting objects, only the current state is accessible; the old state is lost. Even when selecting a method that is not on top of the stack, the debugger does not revert but presents the same state as before. Assume the situation in Figure 1 (right): if we select `moreBar` or `bar`, the debugger presents the state of the program when the error occurred, even if the state was different at the execution of these methods. Recapitulating, there are mainly two issues: loss of the execution trace and loss of objects' old state.

2.2 The Debugging Problem

The missing information about the execution trace and object state makes debugging much harder: with a debugger that is unable to provide this information, we have to work around its absence by running the program multiple times with different breakpoints. For our example, typical steps we have to take to find the bug might be:

- The first question is "Was the initialize method executed?" A breakpoint in the initialize method and a restart of the program confirm that the variables were properly initialized.
- Putting another breakpoint in the setter method of var2 does not halt the (newly restarted) program. We learn that var2 was not set to nil via the setter method. But this does not help in finding the bug.
- Now it is not clear where the next breakpoint should go. We have to explore the code deeper. There is no simple procedure to find the next place for a breakpoint. The debugger does not offer information about the previous values of var2, nor about where they were assigned. The bug is in an already finished method and the debugger cannot jump backwards to this method.

To find the bug as fast as possible, the programmer should not have to think about where to put breakpoints. From this example, we see that the developer lacks ways to explore the complete execution trace. He should have the possibility to explore the previous states of an object, navigate through the places where variables changed their value, and analyze the already executed methods. Some approaches have already used execution traces of programs, but in the context of debugging procedural languages [Duc99b], or for exploring and reverse engineering object-oriented applications [CM93, LN95, RD99]. In object-oriented program debugging, query-based debugging combines conditional breakpoints with logic queries: a query-like expression is evaluated each time a conditional breakpoint is reached [LHS97, LHS99].
However, such approaches require adding clever probes in advance. The Omniscient Debugger [Lew03] is an attempt to provide the full stack to the programmer. TestLog [DGW06] uses a logic engine to query the trace of object-oriented applications, with the possibility of querying the previous state of objects.

3 UNSTUCK: A Backward-In-Time Debugger

Our solution to the problem presented before is to offer a debugger based on event traces and a specific interface to navigate backward in time.

3.1 Trace-Based Debugging

One solution to provide more advanced debugging support is to keep much more information about the execution of a program. For this purpose we collect events representing runtime data. For each method we record the name, the receiver, the arguments and the return value (see Section 5). This information is complemented by collecting every write access to a variable (instance and local). This means that we record every state change. The collected events are basically a data structure containing the specific runtime information. There is one event for each method execution, one for its returned value and one for every write access to a variable. This data can answer many of the questions a programmer has during debugging, but simple navigation through this mass of data is needed.

We build a trace for a given set of classes that are interesting for the user to debug. We transparently instrument the methods of these classes to produce the needed data at runtime. In the following we refer to objects of these classes as instrumented objects. The execution trace holds a huge amount of data, thus we need a methodology for interacting with the debugger, which is described in the following sections.

3.2 User Interface for Navigating the Execution

Basically the user interface of the Unstuck Debugger consists of several views on the collected data from the TraceLibrary, enhanced with search and navigation functions. In the following we describe these views. They are identified in Figure 2 with numbered black boxes; the corresponding number is specified in brackets.

Figure 2: The user interface of the Unstuck Debugger.

Method trace (1). Each line represents a method call. The format is of the form `receiver #selector(arg1, arg2, ...) -> return value`. Each line is indented according to the depth of the message sends. Methods can be collapsed if they are of no interest (and of course expanded as well). Methods can be highlighted for remembering them easily. We can step through the highlighted lines. For the receiver and the return value of the selected method, the object history can be viewed in the object history (4) via a context menu. This view always selects the current method in the trace; it can also change due to interaction with another view.

Object views (2). There are two views displaying objects according to the currently selected method in the method trace: one displays the receiver of the method and the temporary variables (on the left side; the first line represents the receiver, beneath it the temporary variables are listed with their names), the other one the passed arguments (on the right side, with the arguments' names). If an object is instrumented, it can be expanded to display the instance variables. Thus each line represents an object: these lines can be inspected or used for the object history via the context menu.
If an object is an instance variable of an instrumented object, the setter methods of this instance variable and the object it belongs to can be highlighted in the method trace using the context menu. This makes it possible to quickly see where this instance variable changed. We can step through these highlighted methods in the method trace, or use the stepping functions provided by the UI: step to the next/previous/first/last value of this instance variable to navigate through the variable's assignments.

Source code (3). This view displays the source code of the currently selected method in the method trace. Here the source mapping of the events is used to highlight the current event. Normal debugging steps are provided to step through the source code (respectively through the events). The user can select source code and inspect it. He can also manually change the current focus in the execution trace. The object history can display the history for the currently selected object. This view can also be used to program, i.e., the source code can be edited and recompiled.

Object history (4). This view displays every occurrence of a user-selected object in the trace. The events are: message reception by the object, object passed as argument, object state change, object's variable assignment, or object returned from a method. This is useful for backtracking an object: given an occurrence in the trace, we can go backwards through the trace with this object, back to a previous occurrence, and see what happened to it. We see where it was passed as an argument, and thus where it came from, finally arriving at the first occurrence (normally its creation).

Searching (5). This pane consists of a simple search field, where the user can query the events. The method trace (1) highlights the found events. Section 3.3.1 presents this functionality in more detail.

Variable | Search domain
---|---
`event` | All events
`send` | Events representing a method send
`return` | Events representing a method's return
`varAccess` | Events representing a variable store (instance or local)
`instVarAccess` | Events representing only an instance variable store
`tempVarAccess` | Events representing only a local variable store

Table 1: Predefined search variables

3.3 Additional User Interface Features

We need supplementary features to locate interesting events and to mark interesting objects that will help in finding bugs. In the following sections we describe the searching and coloring functions.

3.3.1 Simple Searching

Searching is important and thus should be simple. This is realized in the following manner: there is only one search field where the programmer can provide a boolean expression to identify specific events. Some predefined variables are available: `event` for searching in all the trace events (variable access or message send), `send` for searching only message send events. Table 1 presents the predefined variables that the programmer can use. In the current version, it is not possible to define other variables. Appropriate accessor methods are available for the events to access the collected runtime data (as shown in Table 2). The expression is used as the selection criterion on the corresponding events. The result of the search is a set of events, which are then highlighted in the method trace. The search expression is expressed in the implementation language, here Smalltalk. With this approach users are already familiar with the search language and can access the needed data using a language they know.
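As an illustration, the sketch below shows how such a search expression might be applied to a trace. The names `trace`, `events` and the event protocol (`isSend`, `selector`) are assumptions made for the purpose of the example and do not necessarily match the actual UNSTUCK implementation; `Compiler evaluate:` is simply the classic Squeak way to evaluate a string of code.

```smalltalk
"Hypothetical sketch: compile the text typed into the search field into a
 one-argument block and use it to select the matching send events."
| query matching |
query := Compiler evaluate: '[:send | send selector = #foo]'.
matching := trace events select: [:each |
	each isSend and: [query value: each]].
"The debugger would then highlight the events in 'matching' in the method trace."
```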
In addition, they have full access to the domain objects (via e.g., `event sender`). Thus it is easy to add methods to the domain classes to simplify the more complex queries.

3.3.2 Coloring

Coloring is a useful tool for the developer: it enables tracking objects. The user can assign a color to an object in the trace. Various views (method trace, object views, object history) highlight the object with the assigned color. So it is easy to see whether that object was passed as an argument, was the receiver of a message, or is the instance variable of another object. For an example see Figure 2: in the method trace an object ("a RBMethodNode") is colored. The user can easily detect the object in the trace and quickly see when it was the receiver of a message, an argument or the returned value.

Table 2: Some search expression examples

Query | Result
---|---
`send selector = #foo` | All the executed methods named "foo"
`varAccess newValue class = Foo` | Every variable assignment where the assigned object's class is Foo
`return returnValue > 4` | All returns with a return value greater than 4
`event isSend & (event arg1 = 4) & (event arguments size = 1)` | Only methods which have exactly one argument, which was 4

4 Finding our Bug with the Unstuck Debugger

Coming back to the problem we presented in Section 2, here is how we solve it with the Unstuck Debugger:

- Start the Unstuck Debugger.
- Select the class Foo and provide the code to start the execution (`Foo new start`).
- The Unstuck Debugger instruments the bytecode of the methods of Foo, starts the program, collects the execution trace and presents it in the main user window.
- The error is already visible and it is obvious that nil received the message `size`.
- We want to see the code with the call of the message `size`, thus we step back once in the source view. The source code now shows that `var2` received the message `size`.
- Select `var2` in the source code or in the object view.
- Highlight the modifiers of `var2` (see Figure 3).
- There are two modifiers: one in the `initialize` method and one in `beforeBar`, which is the faulty one.
- Another possibility is to highlight `var2` in the object view and use the stepping functions for the modifiers.

The Unstuck Debugger offers us the information we needed: the modifiers of `var2`. They are only available because the old state and the execution trace are not lost. We do not have to think about breakpoints but can instead directly navigate to the source of the bug.

5 Implementation

Unstuck is implemented in Squeak, the open-source Smalltalk distribution [IKM+97]. Unstuck is based on the TraceLibrary, which offers the execution trace infrastructure. Basically the debugger collects the events, orders them and prepares the state reconstruction. To generate events (method invocation, variable access and method return), the methods are instrumented using ByteSurgeon, a high-level library to manipulate method bytecodes [DDT06]. Figure 4 shows the different layers. The following subsections describe each layer and how they work together.

5.1 Trace Library

The TraceLibrary supports the generation of execution traces from a set of classes and the code to start the program. ByteSurgeon instruments the methods of the given classes to generate the events at runtime. During the execution, a collector gathers these events and forms the program trace.
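As a rough illustration of this workflow, the following sketch shows how a trace for the Foo example from Section 2 might be set up. The selectors `instrumentClasses:` and `runAndTrace:` are purely hypothetical; the paper does not specify the actual TraceLibrary protocol.

```smalltalk
"Purely hypothetical usage sketch -- the real TraceLibrary protocol may differ.
 Instrument a set of classes, run the program seed, and obtain the trace.
 We assume runAndTrace: answers the recorded trace."
| trace |
trace := TraceLibrary new
	instrumentClasses: { Foo };
	runAndTrace: [ Foo new start ].
trace events size.   "number of recorded events"
```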
5.1.1 The Trace and Event Model

A trace is composed of events. The events hold different information depending on the kind of event they represent and on whether the method execution has terminated or not:

• An event representing a message send describes the selector, the receiver, the arguments, as well as the definition class of the method, i.e., the class which defines the method and holds the source code.
• When the method returns, a return event is generated with the returned value.
• When the value of a variable (instance or temporary variable) is changed, a write access event is generated holding the variable name and the new value.

Additionally, every event holds a source range, the mapping between the bytecode and the source code. The depth of an event and the timestamp are added when the collector collects the events. Figure 4 shows a UML diagram of the model: a trace consists of several Events. There are specific events which contain different information, as mentioned before. The non-specific Event class shows which information is common to all events. To optimize the model, a tree is built: an Event belongs to one MessageSend, thus a MessageSend can have multiple Events.

5.1.2 Event Processing

A collector, a TraceCollector, collects these events at runtime and processes them to define an order, to optimize the data structure and to prepare state reconstruction of the objects (receiver, variables and arguments) participating in the trace.

Order definition. The collector has the responsibility to define the order of the events it receives: events are tagged with a timestamp. The depth of each event is also calculated by the collector.

Data structure optimization. We create an event tree from the sequence of events, using the return events as markers for the end of a method execution. Back pointers to navigate from the subevents to the parent events are also managed as part of this process.

State reconstruction preparation. The collector handles every occurrence of objects in the trace for later state reconstruction. By state reconstruction we mean the ability to reconstruct the exact state of an object at any point in time, as we will explain in Section 5.1.3. We distinguish three cases for treating the objects participating in an event (i.e., receiver, arguments, variables): objects that are instances of instrumented classes, instances of other classes, and collection instances.

• Instrumented objects: they do not need any special handling. The collector gets the state changes of such objects from instance variable write access events. With these changes we are able to provide the state of these objects at any time by applying the latest change before that point in time (a small sketch of this bookkeeping follows this list).
• Non-instrumented objects: to remember their state, the collector copies these objects.
• Collections: because the collector does not get any events for changes in a collection, a copy of the collection is saved with the current timestamp. Basically the collector creates a new collection of the same kind and processes every object of the collection as if it were in the trace, i.e., it recursively applies the same process: check whether the object is instrumented, whether it is a collection, and process it as described above.
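The sketch below illustrates the kind of bookkeeping the collector performs for instrumented objects. The selectors and the `changes` instance variable are hypothetical and only serve to make the idea concrete; they are not taken from the actual TraceCollector code.

```smalltalk
"Hypothetical sketch: on every instance variable write access event, remember
 the (timestamp, new value) pair per object and per variable name, so that the
 value at any point in time can later be looked up. 'changes' is assumed to be
 an IdentityDictionary held by the collector; 'anEvent receiver' is assumed to
 be the instrumented object whose variable changed."
recordWriteAccess: anEvent
	| perObject perVariable |
	perObject := changes at: anEvent receiver ifAbsentPut: [IdentityDictionary new].
	perVariable := perObject at: anEvent variableName ifAbsentPut: [OrderedCollection new].
	perVariable add: anEvent timestamp -> anEvent newValue
```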
To support object identity checks, each event has a pointer to the original object and, in addition, for non-instrumented objects, to their copy. When the collector gathers an instance variable write access event, the new value represents a state change of an instrumented object. This change is tied, together with the current timestamp and the variable name, to the corresponding object. This is useful for later state reconstruction, because we do not have to go through the whole trace and collect the needed changes. Note that this step is not strictly necessary, since we could walk over the trace and collect all changes belonging to an object; here they are just ordered at runtime and act as a cache.

5.1.3 State Reconstruction

State reconstruction is the process of reverting an object's state to any desired point in time in the trace. As explained above, the collector prepared the state reconstruction. Depending on the type of object, the reconstruction is different:

• Instrumented objects: for every instance variable we take the latest change before the desired time and apply this change. The applied value is reverted to the desired time, too (see the sketch at the end of this subsection).
• Non-instrumented objects: no reconstruction is needed, we just take the copy the collector has made and associated with the event.
• Collections: we take the last occurrence of the collection in the trace and every object inside is reverted to the desired time.

The following examples show the special handling of collections. The first example adds a collection to the receiver, which is a collection too. Let's assume that the method addAll: is not instrumented but the expression is inside an instrumented method.

```plaintext
...
aCollection addAll: anotherCollection.
...
^someExpression
```

When the collector treats the MessageSend for the method addAll:, it processes the two collections (because they were involved in the method's execution, as the receiver and as an argument). The collector creates a new collection with the current objects inside to remember which ones were in the collection at this time, and handles each object inside as if it were an object in the trace (as described in Section 5.1.2). The same happens when the collector receives a Return event. To get the state of the collection right before this method was executed, we take the new collection created by the collector and reconstruct the state of the objects inside. To get the state of the collection after this method was executed, we take the second collection the collector created when it was treating the return event. This collection includes the newly added objects, thus we get the right state back.

```plaintext
...
aCollection foo: otherCollection.
...
^someExpression
```

with foo: defined as:

```plaintext
foo: collection
	collection removeFirst
```

Similarly to the previous example, let's assume that the method foo: is not instrumented. In addition, let's assume that we have two objects inside otherCollection, one instrumented and one not. After the execution of foo:, the collector creates a new collection. The instrumented object is put in it, together with a copy of the non-instrumented one. Thus we notice that otherCollection has changed. To get the state at the end of the method's execution, we take the newly created collection and put the two following objects inside: the reverted instrumented one (by applying the latest changes) and the copy of the non-instrumented object.
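To make the first case concrete, here is a small sketch of how the value of an instance variable of an instrumented object could be reverted to a given point in time, using the per-variable change lists sketched after Section 5.1.2. All selectors are again hypothetical and not taken from the actual implementation.

```smalltalk
"Hypothetical sketch: answer the value that varName held in anObject at
 aTimestamp by applying the latest recorded change before that time.
 The recorded changes are assumed to be timestamp -> newValue associations."
valueOf: varName in: anObject at: aTimestamp
	| recorded latest |
	recorded := (changes at: anObject ifAbsent: [^nil])
		at: varName ifAbsent: [^nil].
	latest := recorded
		inject: nil
		into: [:best :each |
			(each key <= aTimestamp and: [best isNil or: [each key > best key]])
				ifTrue: [each]
				ifFalse: [best]].
	^latest ifNil: [nil] ifNotNil: [:assoc | assoc value]
```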
5.2 Event Gathering Using ByteSurgeon

ByteSurgeon [DDT06] is a tool for transforming Smalltalk bytecode at runtime. It provides high-level abstractions, thus developers do not need to program at the bytecode level. Bytecodes are the low-level instructions of the Virtual Machine's stack machine. ByteSurgeon can insert code before, after or instead of an instruction. This code is passed in the form of a string of Smalltalk code. If we insert code after an instruction, it will be executed right after the execution of the instruction. Additionally, ByteSurgeon provides the same functionality for methods instead of instructions. For accessing runtime information (such as the receiver of a message or the passed arguments), ByteSurgeon provides meta variables. They have a special syntax (`<meta: #var>`) and can be added to the string that represents the code to insert. ByteSurgeon provides the receiver, the arguments and the returned value of a message send, and the variable name and the new value of a write access to a variable. The IRInstruction (an intermediate representation of a bytecode instruction) delivers the static information, i.e., the selector and the definition class of a message send, and the source range of all events.

As an example we describe how to generate the method send events of a method. ByteSurgeon iterates over the IRInstructions, thus we work with an IRSend, which represents a message send at bytecode level. It provides the static information (selector, source range, definition class). ByteSurgeon provides the runtime information through meta variables (the receiver is accessible with `<meta: #receiver>`, the arguments with `<meta: #arguments>`). ByteSurgeon takes a string of code to insert, thus we generate the string as follows:

```smalltalk
theString := 'TraceCollector default take: (MessageSend',
	' withSelector: ', instr selector printString,          "include the selector"
	' withArguments: <meta: #arguments>',                   "include the arguments"
	' withReceiver: <meta: #receiver>',                     "include the receiver"
	' withSourceRange: ', instr sourceRange printString,    "include the source range"
	' class: ', instr superOf printString, ')'.             "include the definition class"
```

This generates our needed event and the collector stores it. Then we instrument every send in a method with the following code:

```smalltalk
aCompiledMethod instrument: [:instr |
	instr isSend ifTrue: [instr insertBefore: theString]].
```

6 Evaluation and Discussion

To evaluate the practicality of our Unstuck Debugger implementation, we provide three benchmarks: the simple example shown in Section 2, then a bug in the Squeak abstract syntax tree (AST) that results in a slightly larger trace. The third case study (Pier) is the trace resulting from running the tests of a larger system. We show the number of events, the slowdown compared to a simulated run in the standard debugger, and the memory usage.

Case | Number of events | Slowdown | Memory usage (kb)
---|---|---|---
Simple Example | 74 | 6 | 16
AST Bug | 2725 | 3.8 | 800
Pier Trace | 389689 | 248 | 88800

With a slowdown of 4-6 times, the program is still usable for debugging, but as soon as the traces get large, runtime degrades.
Memory grows linearly, as expected. This suggests runtime as the main focus for future research (see Section 8).

7 Related Work

Whyline [KM04] implements Interrogative Debugging for Alice, a 3D world [Ali]. Here the focus lies on providing an interface to ask questions such as why or why not things are happening in an Alice world. Thus these debugging facilities are completely tied to the, quite simple, domain model of Alice. Such an approach does not scale when the domain is more complex, as in normal development. Visualising debuggers can work directly via instrumentation on the program being executed, or are based on post-mortem traces [CM93, LN95]. Visualisation of dynamic information is also related to our work in the sense that it is based on a program trace. DePauw et al. [DPLVW98] and Walker et al. [WMFB+98] use program event traces to visualise program execution patterns and event-based object relationships such as method invocations and object creation.

Query-based debugging [LHS97, LHS99] uses logic programming to express complex queries over a large number of objects. Some queries are triggered at run-time while the program is running; the logic queries act as clever program probes. Here the intention is different: in our approach we navigate the history of the program. Caffeine [GDJ02] is a Java-based tool that uses the Java debugging API to capture execution events and uses a Prolog variant to express and execute queries on a dynamic trace. Caffeine does not support state history access. TestLog [DGW06], which uses a logic engine to query the trace of object-oriented applications, is much closer to the Unstuck Debugger since it offers the possibility to query the previous state of objects. However, no user interface is provided.

OPIUM [Duc99b] is a tool that allows a user to debug Prolog programs using a set of debugging queries on event traces. Prolog is used both as a base language and as a meta language to reason about events. The main usage scenario of OPIUM is the implementation of a high-level debugger for Prolog that allows forward navigation to the next event that satisfies a certain condition. Coca [Duc99a] supports the debugging of C programs based on events. Opium and Coca are mainly used to show the values of variables. In addition, both Opium and Coca do not support object-oriented programming, and the history of object state is not available. Auguston [Aug98, Aug95] also uses a trace composed of event models and test programs. However, it is based on procedural programming languages and does not take into account the specific behavioural aspects of object-oriented languages, such as object creation and the state of objects. Lewis [Lew03] proposes to merge the approach of omniscient debuggers, which collect all the run-time information and support the exploration of the history, with event-based tools that monitor program execution and allow queries. However, it is limited to Java and instrumentation is done at bytecode load time. ZStep [LF98] provides a back-in-time debugger for Lisp. The focus of this research was to provide an environment that allows reversing both program state and the side effects of GUI output, to understand the correspondence between static program code and dynamic program execution.

8 Conclusion and Future Work

In many cases the Unstuck Debugger provides an improvement over conventional debugging: if we have a faulty value, it is easy to find the place where it was set incorrectly. The bug can be chased in the program history.
Such behavior is not possible in a normal debugger, where this work has to be done manually using multiple restarts and breakpoints. When we select a message send in the Unstuck Debugger, it is as if we had stopped the program there with a breakpoint. We have the same information, but there is no need to put a new breakpoint to stop the program in another situation; in the debugger we can just go there. Complemented with backtracking and searching functions, the debugger helps us find bugs much faster. To have all this information available there is a price to pay: a slowdown of the application runtime. But this needs to be weighed against the time won when finding bugs.

Up until now the Unstuck Debugger has been exercised on limited but still challenging case studies such as debugging abstract syntax tree and compiler internals. An application to larger systems has shown that both execution speed and memory consumption need to be analyzed and improved. In addition, as Smalltalk offers a dynamic programming style with on-the-fly recompilation, we plan to investigate whether it is realistic to instrument the complete environment and be able to debug any application without having to provide a program seed. Other possible enhancements are support for threads and the ability to restart execution in the past: the Unstuck Debugger should be able to restart the execution trace from any point in the past, but this would require recreating the execution stack on the fly.

Acknowledgments. We acknowledge the financial support of the Swiss National Science Foundation for the project "A Unified Approach to Composition and Extensibility" (SNF Project No. 200020-105091/1, Oct. 2004 - Sept. 2006) and the French ANR project "Cook: Réarchitecturisation des applications industrielles objets" (JC05 42872).

References
{"Source-Url": "http://subs.emis.de/LNI/Proceedings/Proceedings88/GI-Proceedings-88-2.pdf", "len_cl100k_base": 7071, "olmocr-version": "0.1.50", "pdf-total-pages": 16, "total-fallback-pages": 0, "total-input-tokens": 34399, "total-output-tokens": 8978, "length": "2e12", "weborganizer": {"__label__adult": 0.0003094673156738281, "__label__art_design": 0.0002570152282714844, "__label__crime_law": 0.0002157688140869141, "__label__education_jobs": 0.0002970695495605469, "__label__entertainment": 4.595518112182617e-05, "__label__fashion_beauty": 0.00011074542999267578, "__label__finance_business": 8.499622344970703e-05, "__label__food_dining": 0.0002315044403076172, "__label__games": 0.00044798851013183594, "__label__hardware": 0.0005245208740234375, "__label__health": 0.0002409219741821289, "__label__history": 0.00014197826385498047, "__label__home_hobbies": 4.804134368896485e-05, "__label__industrial": 0.0002015829086303711, "__label__literature": 0.00016486644744873047, "__label__politics": 0.00015616416931152344, "__label__religion": 0.0003445148468017578, "__label__science_tech": 0.0028858184814453125, "__label__social_life": 5.328655242919922e-05, "__label__software": 0.00424957275390625, "__label__software_dev": 0.98828125, "__label__sports_fitness": 0.00023496150970458984, "__label__transportation": 0.00030732154846191406, "__label__travel": 0.00015556812286376953}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 37659, 0.02021]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 37659, 0.50448]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 37659, 0.89781]], "google_gemma-3-12b-it_contains_pii": [[0, 2563, false], [2563, 3839, null], [3839, 5388, null], [5388, 8102, null], [8102, 9303, null], [9303, 12414, null], [12414, 14861, null], [14861, 17236, null], [17236, 18116, null], [18116, 20745, null], [20745, 23402, null], [23402, 26051, null], [26051, 28707, null], [28707, 31772, null], [31772, 34545, null], [34545, 37659, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2563, true], [2563, 3839, null], [3839, 5388, null], [5388, 8102, null], [8102, 9303, null], [9303, 12414, null], [12414, 14861, null], [14861, 17236, null], [17236, 18116, null], [18116, 20745, null], [20745, 23402, null], [23402, 26051, null], [26051, 28707, null], [28707, 31772, null], [31772, 34545, null], [34545, 37659, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 37659, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 37659, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 37659, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 37659, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 37659, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 37659, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 37659, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 37659, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 37659, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 37659, null]], "pdf_page_numbers": [[0, 2563, 1], [2563, 3839, 2], [3839, 5388, 3], [5388, 8102, 4], [8102, 9303, 5], [9303, 12414, 6], [12414, 14861, 7], [14861, 17236, 8], [17236, 18116, 9], 
[18116, 20745, 10], [20745, 23402, 11], [23402, 26051, 12], [26051, 28707, 13], [28707, 31772, 14], [31772, 34545, 15], [34545, 37659, 16]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 37659, 0.05505]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
5e6c8c6019e4f3e997dbd3c00df1f50e5ff09fa8
[REMOVED]
{"len_cl100k_base": 6613, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 24420, "total-output-tokens": 7648, "length": "2e12", "weborganizer": {"__label__adult": 0.0002923011779785156, "__label__art_design": 0.0003745555877685547, "__label__crime_law": 0.00034546852111816406, "__label__education_jobs": 0.000934123992919922, "__label__entertainment": 5.936622619628906e-05, "__label__fashion_beauty": 0.00013315677642822266, "__label__finance_business": 0.0003554821014404297, "__label__food_dining": 0.0002818107604980469, "__label__games": 0.000385284423828125, "__label__hardware": 0.0005865097045898438, "__label__health": 0.00047969818115234375, "__label__history": 0.0002601146697998047, "__label__home_hobbies": 8.440017700195312e-05, "__label__industrial": 0.000431060791015625, "__label__literature": 0.0003046989440917969, "__label__politics": 0.0002007484436035156, "__label__religion": 0.0003726482391357422, "__label__science_tech": 0.040557861328125, "__label__social_life": 8.07642936706543e-05, "__label__software": 0.01500701904296875, "__label__software_dev": 0.9375, "__label__sports_fitness": 0.00018966197967529297, "__label__transportation": 0.0003781318664550781, "__label__travel": 0.00017631053924560547}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34276, 0.01655]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34276, 0.59184]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34276, 0.9267]], "google_gemma-3-12b-it_contains_pii": [[0, 4662, false], [4662, 10185, null], [10185, 13335, null], [13335, 18343, null], [18343, 23327, null], [23327, 28448, null], [28448, 33323, null], [33323, 34276, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4662, true], [4662, 10185, null], [10185, 13335, null], [13335, 18343, null], [18343, 23327, null], [23327, 28448, null], [28448, 33323, null], [33323, 34276, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34276, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34276, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34276, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34276, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34276, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34276, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34276, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34276, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34276, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34276, null]], "pdf_page_numbers": [[0, 4662, 1], [4662, 10185, 2], [10185, 13335, 3], [13335, 18343, 4], [18343, 23327, 5], [23327, 28448, 6], [28448, 33323, 7], [33323, 34276, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34276, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
1854c919a68b869e7365ba0f8cb5e1989d0e6ef1
PDF hosted at the Radboud Repository of the Radboud University Nijmegen. The following full text is a preprint version which may differ from the publisher's version. For additional information about this publication click this link: http://hdl.handle.net/2066/176173. Please be advised that this information was generated on 2018-09-07 and may be subject to change.

Towards a Two-Dimensional Framework for User Models

P.T. de Vrieze¹, P. van Bommel¹, J. Klok², and Th. van der Weide¹
¹ University of Nijmegen
² Océ Research & Development

Abstract. The focus of this paper is user modeling in the context of the personalization of information systems. Such personalization is essential to give users the feeling that the system is easily accessible. How this adaptive personalization works depends strongly on the adaptation model that is chosen. We introduce a generic two-dimensional classification framework for user modeling systems. This enables us to clarify existing as well as new applications in the area of user modeling. In order to illustrate our framework we evaluate push- and pull-based user modeling.

1 Introduction

The research area of user modeling seeks to enhance human-computer interaction by adapting the system to the user. This topic has already gained attention from various authors; see [1], [2], [3], [4]. User modeling involves the use of incremental behaviour analysis for acquiring user models. It also involves adaptation of the system behaviour to the user model. For a background on system adaptation we refer to [3], [5], [6].

The key part of a user modeling system is the user model. In order to know what a user model should look like, it is necessary to know the adaptation methods that are going to be employed. These methods are described in the adaptation model, a general model that describes how the user models need to be created, maintained and used. We distinguish two kinds of adaptation models: a push adaptation model and a pull adaptation model. These models are based on the direction of inference in the system. Further, it is possible to combine both models into a hybrid adaptation model that combines aspects of both. An example of a hybrid system can be found in. While publications have described the use of both kinds of models and combinations of them, they have not explicitly evaluated the advantages and disadvantages of these models. We believe that such an evaluation is important in order to design user modeling systems better.

In this paper we analyse the differences between the push and pull adaptation models. For that it is important to first define what a user modeling system actually is, and which parts of a system can be seen as part of the adaptation system. For that reason we give an overview of user modeling systems in section 2. After that we introduce a list of demands that a user modeling system should satisfy. This list is then used in sections 5, 6, and 7 to evaluate the push, pull and hybrid adaptation models. Finally, in section 8 we evaluate our framework and state possible points of further research.

2 Overview of User Modeling Systems

A user modeling system is a system that shows adaptive behaviour in its interaction with the user. To explain the difference between conventional systems, i.e. interactive systems that do not employ user modeling (see figure 1(a)), and user modeling systems (see figure 1(b)), we first need to describe conventional systems in a suitable way.
Then we need to describe user modeling systems, and compare them. In the next two sections we describe both conventional and user modeling systems.

Conventional interactive systems (see figure 1(a)) can be seen as state machines that interact with a user. This interaction is handled by a user interface. Each user action can induce a state change, after which new user actions are possible. In designing a user interface several choices have to be made concerning the looks and behaviour of the interface. Many of these choices are implicit or given by default choices from guidelines. For the sake of being able to compare a conventional system with a user modeling system, we assume that the choices are explicit. We call those choices interface properties. The interface properties determine both the behaviour and the looks of the user interface. In a conventional system user actions induce events. These events trigger system actions and interface changes. These actions and interface changes can differ based on the interface properties.

In a system based on user modeling (see figure 1(b)), the behaviour of the various handlers may be affected by user properties in addition to the handler-specific properties. See e.g. [4] and [7] for systems that show such a change of behaviour. Those user properties are supplied by the adaptation system. The user properties can be seen as questions asked by the system about a specific user property. As the adaptation system can be seen as the authority on the user, the questions should be formulated in such a way that all inference happens inside the adaptation system. As a consequence of the user properties influencing the handlers, the user interface now takes the user model into account, as its behaviour is determined by the user interface handler. The same goes for the action handler. The user properties are provided by the adaptation handler, which generates these properties based on events fed to it by the event handler. The main point of user modeling is how to go from these events to the user properties.

3 Further Analysis of User Modeling Systems

To evaluate user modeling systems it is very useful to have a clear method for comparing them. For this purpose we have developed a two-dimensional classification framework. Our framework considers all kinds of user modeling systems and is not derived from a classification of existing systems. In this it differs significantly from the framework in [8]. Figure 2 presents the proposed framework. Along the horizontal axis is the inference process. It goes from the event model to the user model, and from the user model to the system concept model. The event model consists of the actual events generated by the system, the user model of the most system-independent user properties, and the system concept model of all the user questions that can be asked by the system. For certain user properties many derivation steps are necessary, and for others only a few. For this reason we model the progress in that process, not the steps. Further, we define the model that is least system specific to be in the middle. For that reason all systems will have their highest point in the middle.

On the vertical axis we model system independence. At the start of the adaptation process, there are events generated by the system. These events are maximally system dependent. An example of such an event could be: "The user fills box 123 with a purple background". We call the model here the event model.
For adaptation purposes the events generated by the system are not that relevant. An adaptation system wants to use specific cases to infer knowledge about the general case. This inference process goes in a number of steps. At some point a model is inferred that is most general. An example of knowledge that can be inferred here is: "The user's favourite colour is purple". This is part of what we call the user model. Once the user model is known, the system needs to know how this model fits the questions a user modeling system might have. A user modeling system wants to know the answer to a question like: "What background color should a new box have?". In the adaptation phase of the system, the adaptation system will try to get system-dependent answers based on the general knowledge from the user model. The model of answers to system questions is called the system concept model. The system concept model is where the user properties live.

We can use the framework of figure 2 to determine two properties of systems. Firstly, we can look at the height of the triangle to determine how system specific an adaptation system is. For example, in figure 3 we see the systems S2 and S4. S2 is more system independent than S4. This could mean that S2 can more easily be extended to provide more or different adaptation. The second property we can distinguish is where in the inference process a persistent model is stored. This is an important measure, as the process is different before and after storage. Before storage, a push process needs to be used to create the model. Push here means that the arrival of an event generates a waterfall of subsequent events that lead to updating the persistent model. We call this push adaptation. We will discuss the advantages and disadvantages of push-based systems in section 5. After storage, we need to use a pull strategy to perform adaptation. This starts with the system requesting the value of a certain property from the adaptation system. For determining the value of this property the adaptation system might want to use the values of other properties, which might also need to be calculated. This goes on until the persistent model is used. We call this pull adaptation.

Fig. 3. Use of the two-dimensional classification framework

As an example of the use of the framework we look at figure 3, which shows six systems with different properties. System S1 is almost a purely pull-based system, as its persistent model is created very early on in the inference process, while S5 can be classified as a hybrid system and S6 as a rule-based system. The other systems are all different kinds of hybrid systems. Note that S5 is almost in the middle, but a system completely in the middle would be rather unrealistic. Based on the locations of the systems in figure 3 we can say things about the systems, and especially about their relations to each other. As an example, looking at systems S3 and S5 we can say that system S3 has a bias towards pull modeling compared to S5 and that S3 is more system dependent than S5. This can be used to make statements about these systems such as: "the persistent model of S3 is probably relatively bigger than the persistent model of S5", "it is probably easier to extend the adaptation system of S5 than to extend that of S3", and "the persistent model of S5 is less system dependent than that of S3".

4 Properties of a User Modeling System

In the framework from section 3 we saw that there is push adaptation and pull adaptation.
In the coming sections we analyse the advantages and disadvantages of these adaptation strategies. To make this analysis we have identified a number of key properties of user modelling systems. Although some of these properties are not easily measured, we still believe they are important.

- **Adaptability.** The user should be able to manually adapt his model to a certain extent.
- **Speed.** The users' perception of the system's speed should not decrease.
- **Extensibility.** The system should be extensible while retaining the existing knowledge about its users.
- **Model size.** The model size should not grow too large.
- **Analysis possibilities.** The chosen kind of adaptation model should allow for all kinds of analysis techniques.
- **Privacy.** The system should be designed in such a way as to guarantee the highest possible level of privacy for the users.

Some of these properties are more important than others; it mainly depends on the application. We will not further discuss privacy as it depends mostly upon the application and very little on the adaptation model.

5 Push Adaptation Models

Push adaptation models are adaptation models that let events propagate on to the values of a user model. Many systems that use push adaptation models use a rule-based model as employed in [9]. That paper describes the adaptation system of the AHA! system, a research system for creating adaptive hypermedia. These rule-based models are based on Active Database technology and as such inherit limitations from database systems.

Fig. 4. A push adaptation model

There are several issues with ECA rules. There is the possibility of endless recursion. Also, a choice has to be made among techniques for achieving confluence: it should not be possible that equal starting models and equal events lead to different final user models. One advantage of push adaptation is the fact that the contents of the user model are well aggregated. This has the advantage that those contents can be easily understood. Another advantage is that the relative size of the user model stays small, and that the size does not change during regular use of the system. This does, however, impede the possibilities for basing the values of newly introduced attributes upon already observed behaviour of the user. A minimal sketch of such rule-driven updating is given after the evaluation list below. In this section we evaluate push-based adaptation models based on the points from section 4.

- **Adaptability.** Because the user model stores end values, it will be fairly easy for users to adapt the model to their wishes, as the results of their changes are obvious and local. There could be too many possibilities for changes, though.
- **Speed.** Provided that the number of rules stays within limits, there are no serious speed issues with push adaptation models.
- **Extensibility.** Push adaptation models are similar to database theory, and are often based on it. They share a problem with databases: database systems are not good at data model change. The same holds for rule-based adaptation models. At the moment the adaptation model changes, values for new properties need to be calculated, which can be expensive in terms of time.
- **Model size.** A push adaptation model has a user model with a limited size. This is because events are aggregated into the user model at the moment they happen.
- **Analysis possibilities.** The fact that event aggregation in rule-based adaptation models happens at the moment the events happen makes it hard to impossible to perform time-based analysis on user actions. Also aging (weighing recent events higher than older events) is hard to implement.
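The following sketch makes the push direction concrete, reusing the "favourite colour" example from section 3. The class and selector names (`userModel`, `handleEvent:`, `isColourChoice`) are hypothetical and do not correspond to any particular system discussed in this paper, in particular not to AHA!.

```smalltalk
"Hypothetical push adaptation sketch: each incoming event immediately updates
 aggregated end values in the persistent user model (event-condition-action style).
 Only the aggregates are stored, not the raw events. 'userModel' is assumed to be
 a Dictionary-like persistent store."
handleEvent: anEvent
	| counts best |
	anEvent isColourChoice ifFalse: [^self].
	counts := userModel at: #colourCounts ifAbsentPut: [Dictionary new].
	counts at: anEvent colour put: (counts at: anEvent colour ifAbsent: [0]) + 1.
	best := counts associations
		inject: nil
		into: [:max :each |
			(max isNil or: [each value > max value]) ifTrue: [each] ifFalse: [max]].
	userModel at: #favouriteColour put: best key
```

After this update the individual event is no longer needed, which keeps the model small but also makes time-based analysis and aging impossible, as noted in the evaluation above.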
From this point-by-point overview we can see that push adaptation models are especially good in the areas of model size and complexity. The weakest point lies in the extensibility of the model. Push adaptation models are very popular within the domain of educational systems. Those systems can be characterised by the fact that the user properties that need to be modelled are often (static/discrete/...). Push adaptation models are used in other systems too, though. Examples of push adaptation models can be found in [9], [10], [11], [8].

6 Pull Adaptation Models

Pull adaptation models perform adaptation from the opposite direction compared to push models. In the extreme case, a pull adaptation model records all events in the user model. High-level attributes are then derived based on lower-level attributes and by querying the event record.

Fig. 5. A pull adaptation model

The pull model is based on calculation at the moment of the request. As such, extension of the adaptation model is a lot easier than with push models. One problem with the functional model, though, is the fact that the recorded data has very little value on its own. For adaptation purposes one would prefer to know concepts of user behaviour, not individual events. Push adaptation makes sure that concept generation needs to be done only once. Certain concept generation rules might be quite complex and would take a long time to recalculate on every use. To allow this in the pull model, caching could be very helpful. A minimal sketch of such on-demand derivation is given after the evaluation list below.

- **Adaptability.** Pull models have problems with adaptability. This is caused by the fact that their user models store huge amounts of abstract facts. One cannot expect even experts to be able to make changes with predictable results in such a user model. An exception is that exclusion of time periods is easy in pull models: all events have a timestamp, and removal of facts just leads to different results of the functions.
- **Speed.** As user models that store events can get very big, there is certainly a need for extensive caching of intermediate results. The language used to query the user model could provide tools for incremental queries, where old results get enhanced with newer facts. Also, the set of matching events can be stored to be used as a base for the query at a later time.
- **Extensibility.** The pull adaptation model scores very well on the point of extensibility. As abstract events are stored, there will be many cases where new user attributes can be derived from behaviour recorded before the attribute was introduced.
- **Model size.** Model size is a disadvantage of the pull adaptation model. With a little loss of model quality, though, old events could be aggregated into smaller parts or even discarded. If the number of users of the system is not very high, we do not believe there is a big problem with model size.
- **Analysis possibilities.** The pull adaptation model allows for more analysis possibilities. As all data in the user model is timestamped, time-based analysis and aging are easily performed. There are no analysis possibilities in the push model that are not available in a pull model.
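To contrast with the push sketch above, here is a hypothetical pull counterpart: the property is derived only when requested, by querying the stored events, which also makes aging straightforward. Again, all names (`events`, `isColourChoice`, the numeric timestamps) are illustrative assumptions.

```smalltalk
"Hypothetical pull adaptation sketch: nothing is aggregated up front; the
 favourite colour is computed on demand from the timestamped event store,
 weighing recent events higher than older ones (aging)."
favouriteColourAt: now
	| weights best |
	weights := Dictionary new.
	(events select: [:each | each isColourChoice]) do: [:each |
		| weight |
		weight := 1.0 / (1 + (now - each timestamp)).   "simple aging scheme"
		weights at: each colour
			put: (weights at: each colour ifAbsent: [0]) + weight].
	best := weights associations
		inject: nil
		into: [:max :each |
			(max isNil or: [each value > max value]) ifTrue: [each] ifFalse: [max]].
	^best ifNil: [nil] ifNotNil: [:assoc | assoc key]
```

Deriving the value this way keeps the raw events available for new attributes and time-based analysis, at the cost of recomputation (or caching), mirroring the trade-offs listed above.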
### 7 Hybrid Adaptation Models

Both adaptation models have their advantages and disadvantages. The push model, for example, may need workarounds for properties such as a user's age (dynamic properties that change continuously). The pull model is not very good at storing static user properties and can be very space-inefficient. Looking at the two phases of the user modelling process, we can see that while the model use phase is especially suited to a pull approach, the modelling phase is more directed towards a push approach. We can exploit this with a hybrid adaptation model, which can combine the advantages of both pure models: essentially, the push model has its place in the user modelling phase and the pull model in the adaptation phase. A minimal code sketch of this hybrid idea is given at the end of this section.

- **Adaptability.** By storing system-independent user properties, the hybrid system can offer the user clear high-level properties to change, rather than properties that are either too-abstract events with unclear results (pull) or many system-specific properties with overly localised results (push). This could mean that the adaptability of a hybrid system is better than that of both the rule-based and the functional approach. This advantage could vanish if the rule-based and functional models offer adaptability of intermediate concepts at the same level as the user properties of the hybrid model.
- **Speed.** Hybrid adaptation models should relieve many of the potential speed problems of the functional model, as they can reduce the complexity of its event store. They also avoid the rule explosion that comes with a large, interrelated push model.
- **Extensibility.** The modelling process goes from very system-specific events to less system-dependent concepts. Those system-independent concepts can serve as building blocks for extension, which system-dependent events cannot really do. So there is no real loss of extensibility when using a hybrid model in which the stored concepts are less system-dependent.
- **Model size.** In the hybrid model the model size can be significantly smaller than in the pull model, because it stores higher-level concepts rather than single events.
- **Analysis possibilities.** Because hybrid adaptation models allow different adaptation strategies for different properties, they can retain most of the analysis possibilities of function-based adaptation models. At the same time, they can take advantage of the properties of rule-based adaptation models wherever the analysis possibilities offered by a function-based approach are not needed.

Hybrid adaptation models are more common than one would expect. They can often be found in systems where no special effort was put into the adaptation model. One area where they are almost unavoidable is that of recommender systems, which tend to focus on document–user matching techniques. Many of these systems build a single "user model" out of the event history of the user–system interaction (push); those user models are then used at query time to rank different recommendations (pull). Examples of recommender systems can be found in [8] and [16].
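The hybrid strategy can be sketched along the same lines. In this illustrative fragment (all names, weights, and the decay constant are assumptions), system-specific events are pushed into a compact store of system-independent concept observations, and the actual profile is pulled from that store only when a recommendation is requested.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Minimal hybrid sketch: system-specific events are pushed into a small store of
 *  system-independent concept observations; profiles are pulled from that store on demand. */
class HybridUserModel {
    static class Observation {
        final long timestamp;
        final String concept;   // e.g. "interest.sports"
        final double weight;
        Observation(long timestamp, String concept, double weight) {
            this.timestamp = timestamp;
            this.concept = concept;
            this.weight = weight;
        }
    }

    private final List<Observation> observations = new ArrayList<>();

    /** Push phase: translate a raw, system-specific event into a concept observation. */
    void onEvent(String eventType, String itemCategory, long now) {
        if ("ITEM_VIEWED".equals(eventType)) {
            observations.add(new Observation(now, "interest." + itemCategory, 1.0));
        }
        // Raw events are not kept; only the (much smaller) concept observations are stored.
    }

    /** Pull phase: rank concepts on demand, with simple aging. */
    Map<String, Double> interestProfile(long now) {
        Map<String, Double> profile = new HashMap<>();
        for (Observation o : observations) {
            double ageInDays = (now - o.timestamp) / 86_400_000.0;
            profile.merge(o.concept, o.weight * Math.exp(-ageInDays / 30.0), Double::sum);
        }
        return profile;
    }
}
```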
### 8 Conclusion

In this paper we have introduced a framework for classifying user modelling systems. With this framework we have shown that there are two basic categories of adaptation, rule-based adaptation and function-based adaptation, and we have pointed out several examples of systems in each category. Besides rule-based and function-based systems, there is also the possibility of hybrid systems. We believe such hybrid systems can solve the problems of both pure approaches and combine their strong points. We have also pointed out that user modelling systems can differ in their degree of system dependence, and that this system dependence can serve as an indication of how easily a system can be extended.

References
{"Source-Url": "https://repository.ubn.ru.nl/bitstream/handle/2066/176173/176173.pdf?sequence=1", "len_cl100k_base": 4358, "olmocr-version": "0.1.53", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 22394, "total-output-tokens": 5545, "length": "2e12", "weborganizer": {"__label__adult": 0.0003886222839355469, "__label__art_design": 0.0015125274658203125, "__label__crime_law": 0.00037598609924316406, "__label__education_jobs": 0.006343841552734375, "__label__entertainment": 0.00017392635345458984, "__label__fashion_beauty": 0.00023472309112548828, "__label__finance_business": 0.0006427764892578125, "__label__food_dining": 0.0004475116729736328, "__label__games": 0.0006275177001953125, "__label__hardware": 0.0008816719055175781, "__label__health": 0.0007839202880859375, "__label__history": 0.0005631446838378906, "__label__home_hobbies": 0.00013339519500732422, "__label__industrial": 0.0004699230194091797, "__label__literature": 0.0008769035339355469, "__label__politics": 0.0003402233123779297, "__label__religion": 0.00046443939208984375, "__label__science_tech": 0.164794921875, "__label__social_life": 0.00017952919006347656, "__label__software": 0.03875732421875, "__label__software_dev": 0.77978515625, "__label__sports_fitness": 0.0002727508544921875, "__label__transportation": 0.0005526542663574219, "__label__travel": 0.0002646446228027344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24336, 0.02066]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24336, 0.14312]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24336, 0.9439]], "google_gemma-3-12b-it_contains_pii": [[0, 367, false], [367, 2783, null], [2783, 4187, null], [4187, 7249, null], [7249, 9338, null], [9338, 11391, null], [11391, 13278, null], [13278, 15350, null], [15350, 18356, null], [18356, 21331, null], [21331, 24336, null]], "google_gemma-3-12b-it_is_public_document": [[0, 367, true], [367, 2783, null], [2783, 4187, null], [4187, 7249, null], [7249, 9338, null], [9338, 11391, null], [11391, 13278, null], [13278, 15350, null], [15350, 18356, null], [18356, 21331, null], [21331, 24336, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 24336, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24336, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24336, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24336, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24336, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24336, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24336, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24336, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24336, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24336, null]], "pdf_page_numbers": [[0, 367, 1], [367, 2783, 2], [2783, 4187, 3], [4187, 7249, 4], [7249, 9338, 5], [9338, 11391, 6], [11391, 13278, 7], [13278, 15350, 8], [15350, 18356, 9], [18356, 21331, 10], [21331, 24336, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24336, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
48d96b47e0e168e750ba8b7f6a65e2cc6f9607a0
TAGGINGSENSE: Method Based On Sensemaking For Object-Oriented Source Code Comprehension Daniel Schreiber Post Graduate Program in Informatics Pontifícia Universidade Católica do Paraná – PUCPR Curitiba, Brazil xiraba@gmail.com André Menolli Computer Science Department Universidade Estadual do Norte do Paraná - UENP Bandeirantes, Brazil menolli@uenp.edu.br Sheila Reinehr, Andreia Malucelli Post Graduate Program in Informatics Pontifícia Universidade Católica do Paraná – PUCPR Curitiba, Brazil sheila.reinehr@pucpr.br, malu@ppgia.pucpr.br Abstract— All software requires maintenance, either for error correction or for implementing updates. However, maintenance is often complex and expensive, and one of the main problems in the high cost of maintenance is the difficulty of understanding the source code of other authors. Thus, this research presents TaggingSense, a method based on sensemaking that aims to reduce object-oriented source code comprehension time on systems maintenance. Through experimentation, it was possible to observe knowledge extracted from the source code, processing, and sharing, to be positively assisted in the source code maintenance and comprehension process, thus bringing benefits such as reduction time spent, quality, and greater security in the changes made. Keywords-knowledge;sensemaking;source code maintenance;ontology. I. INTRODUCTION Software maintenance is one of the activities that consume substantial resources in software projects. In the mid-1980s, the total cost invested in maintenance and improvement accounted for over 60% of the total cost of software systems [1]. In contrast, in the 2000s, total maintenance cost exceeded more than 90% [2]. Maintenance is inevitable because we must ensure updated and efficient software, and this activity is performed for various reasons, such as changes in requirements, bug fixes, component modifications, software improvement, source code optimization, and efficiency improvement, among others [3]. Among several proposed techniques and processes to improve software maintenance, some studies explore cognitive aspects related to software comprehension. With source code being the main maintenance component, comprehension is the predominant factor for providing effective software maintenance, thus allowing the development of computerized systems [4]. Software comprehension corresponds to activities that people perform in order to understand, conceptualize, and reason about software [5]. It is estimated that developers dedicate an average of 40% to 90% of the maintenance effort to the software comprehension process [6] [7]. One of the possible reasons for difficulty in source code comprehension is the lack of knowledge by people without experience, as well as by programmers from other fields. One method to build knowledge and make sense of things is through sensemaking. Sensemaking is the process of turning circumstances into situations that can be comprehended explicitly in words, and that serves as a catalyst for actions [8]. Weick [8] considers labeling (assigning explicit names) an essential step in sensemaking. In maintenance activities, it is in the analysis and comprehension stage that those involved do work to extract knowledge and use it to continue with maintenance. During this activity, the acquired knowledge is conserved in people's memories, and such knowledge is divided into two classes: syntactic and semantic [9]. Both semantic and syntactic knowledge are directly and indirectly related to source code comprehension. 
Many studies and models of comprehension have identified different types of knowledge, including knowledge of programming, knowledge of the real-world situations addressed by the software, and knowledge of the application domain [10]. After comprehension, the coding activity is performed, a process through which developers declare their intentions to the computer. This activity demands considerable processing power and memory from people because, in addition to the domain, developers need to visualize the organization of objects and routines as well as the data flow [11]. These challenges, coupled with the effort spent on maintenance and the absence of an ideal solution to these problems, led to the development of this research. We believe that a comprehension method applied to the source code, based on the extraction and dissemination of knowledge, can assist the comprehension process, reducing uncertainty and the time dedicated to maintenance tasks. Therefore, this study aims to develop a method based on sensemaking to reduce object-oriented source code comprehension time during system maintenance. More specifically, it intends to answer the following question: Is it possible to reduce the time and effort of source code comprehension, and thus increase the quality and efficiency of software maintenance?

(DOI reference number: 10.18293/SEKE2015-038)

II. RELATED LITERATURE

Of all the activities involved in the maintenance process, comprehension is the most important, as it is considered the essential basis for modifying a software product [12]. Studies show that the effort applied to maintenance is mainly targeted at the comprehension part [11]. Several works have been developed on software maintenance and comprehension; not all of them serve the same purpose, but they use similar techniques for working on the source code. For example, [10] studied the complexity of understanding a program at maintenance time for the purpose of calculating and estimating effort metrics. Work [13] identified two levels of comprehension: syntactic and semantic. The work proposed in [14] seeks, by cataloging source code, to discover programmers' knowledge of the application domain. Work [12] explored a method for keeping software engineering artifacts "connected" through semantic links, starting from the source code, by means of ontologies. Work [15] proposes a union of the ontology of code knowledge with domain knowledge, and lastly, work [16] developed a source code and documentation ontology to assist the comprehension process through complex searches inferred over an ontology populated by text mining applied to the source code. The use of ontologies has been explored extensively in software maintenance activities in most of the works highlighted here. Among the techniques for applying ontology to the source code, this paper proposes a new approach: using ontology as a consequence of the knowledge extracted from the source code through the sensemaking technique. Based on sensemaking, we propose the development and implementation of a method whose principle is to formalize and implement a folksonomy within the source code, so that knowledge can be extracted and maintained in a knowledge base, with the goal of extracting and disseminating both the domain knowledge and the features contained in the source code.
III. TAGGINGSENSE METHOD

In this section, we present the "TaggingSense" method, which supports the steps and the intrinsic processes involved in source code comprehension during software maintenance. The method combines the tagging concepts of folksonomy with the stages and processes identified by sensemaking, with the goal of accelerating and improving the comprehension of unknown source code.

A. Method Structure

From the eight stages of sensemaking (Organizes Flux, Noticing and Bracketing, Labeling, Retrospective, Presumption, Social and Systemic, Action, and Organizing through Communication) conceived by [8], four activities have been defined for the proposed method, described as follows:

- Observation: the superficial analysis that a programmer performs when starting the maintenance activity. Based on these observations, the programmer formulates ideas and structures drawing on experience from past projects.
- Extraction: the activity of extracting and developing the knowledge contained in the source code. This activity starts sensemaking. Knowledge is formalized and archived by the programmer together with the source code.
- Organization: organizing and structuring the extracted knowledge. This activity consists of supporting or rejecting the ideas and hypotheses raised, in order to improve the structuring of the knowledge. It is in this activity that the programmer identifies phenomena and observed patterns, improves externalization, and catalogs the acquired knowledge.
- Collaboration: the main component of this activity is communication. Knowledge is shared and developed with the group of people involved in the process through the exchange of experience and the refinement of learning.

Based on these activities, the method and the steps involved in each activity are detailed in Table I. In total, four activities were created, comprising 15 interrelated steps. Each activity has a purpose that serves as input to generate a specific output. The outputs generated by the activities are: (i) formulation and structuring of ideas and hypotheses (observation activity), where ideas are formulated and structured tacitly and externalization occurs in the next stage; (ii) formalized knowledge (extraction activity), the transformation of tacit knowledge into explicit knowledge; (iii) restructured and organized knowledge (organization activity), which organizes knowledge in a structured way and enriches existing knowledge with more information; and (iv) the knowledge base (collaboration activity), the location where all knowledge extracted from the source code by one or more programmers is stored.

B. Knowledge Representation

The TaggingSense method proposes the use of a folksonomy-based ontology for organizing and managing tags. In [27], the authors define folksonomy as the result of personal free marking (tagging) of information and objects for later retrieval. Compared with methods based on automatic text extraction, tagging through folksonomy better captures a demonstration of human thought [17]. Through a manual process, the user develops sensemaking of the source code and identifies a topic or piece of knowledge through tagging; the folksonomy is thus the result of the sensemaking process carried out by the user. One of the strengths of folksonomy is the free assignment of words to features.
Annotating a feature with multiple keywords requires less cognitive effort than selecting a single category [18]. Folksonomy is represented through ontology, which serves as basis for supporting the processes. This helps to solve the main problems of folksonomy, such as synonyms, ambiguities, and searches. The main ontologies developed to support the tagging process were evaluated, such as Newman [19], SCOT [20], MOAT [21], Knerr [22], and NAO [23]. After analyzing the available ontologies, the ontology of Knerr [22] was chosen, due to its better compliance with the requirements of the problem, its availability, and easy access to documentation of their classes and properties. TABLE I. TAGGINGSENSE METHOD ACTIVITIES AND STEPS <table> <thead> <tr> <th>Activities</th> <th>Steps</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><strong>Observation</strong></td> <td>Structure analysis</td> <td>Preliminary study of the structuring of the source code.</td> </tr> <tr> <td></td> <td>Technical knowledge and domain search</td> <td>Improvement of technical knowledge in relation to the source code structure, such as programming language, paradigms, architecture, and standards, in addition to complementary studies related to the domain.</td> </tr> <tr> <td></td> <td>Knowledge extraction</td> <td>Development of domain concepts. Assimilation between domain issues and technical issues related to the source code.</td> </tr> <tr> <td></td> <td>Tagging</td> <td>Marking source code through tags. Use of folksonomy to assist, support, and organize tags created during knowledge extraction.</td> </tr> <tr> <td></td> <td>Externalization</td> <td>Knowledge articulation occurs, i.e., transformation of tacit knowledge into explicit or usable knowledge. This task represents the continued task of Tagging.</td> </tr> <tr> <td></td> <td>Guides</td> <td>Improvement of source code tagging. Tagging is structured in a way that helps programmers find such markings in the source code through waypoints.</td> </tr> <tr> <td></td> <td>Enrichment tags</td> <td>Development of new concepts related to those already developed and identified by means of tags.</td> </tr> <tr> <td><strong>Extraction</strong></td> <td>Knowledge refinement</td> <td>Refinement of points related to application domain. Reevaluation and continuation of “Knowledge extraction” task of previous activity.</td> </tr> <tr> <td></td> <td>Tag re-evaluation</td> <td>Importance validation with project and domain. Redundant tags are eliminated; common tags are reused in the project.</td> </tr> <tr> <td></td> <td>Cataloging standardization</td> <td>Standardization between the terms already created.</td> </tr> <tr> <td></td> <td>Release</td> <td>All new created tags have visibility property set to private because they are developed at this stage and can change or be eliminated by the creator.</td> </tr> <tr> <td></td> <td>Storage</td> <td>Throughout the process, extracted knowledge is stored in a database called knowledge base, through ontologies meaning.</td> </tr> <tr> <td></td> <td>Sharing</td> <td>Database must be shared with everyone specifically involved in the process of project maintenance.</td> </tr> <tr> <td></td> <td>Reuse</td> <td>Reuse of tags created by other programmers.</td> </tr> </tbody> </table> **IV. TAGGINGSENSE ENVIRONMENT** To support the TaggingSense method, we implemented an environment to allow tagging the source code in order to assist in its comprehension. 
The tagging process consists of manually extracting source code knowledge, and adding it in the folksonomy ontology. This information corresponds to keywords for the tag, date, time, and creator, in addition to the class, method, variable, or related code snippet, that can be inserted to the same tag created for other individuals. This environment was built as a plug-in for the Eclipse development IDE (integrated development environment). In this environment, interaction starts from the programmer’s comprehension of the source code from the bottom to the top (“bottom-up”) of the source code lines that represent the domain knowledge, through the identification of relevant chunks. Chunks are code portions that programmers can recognize. Large chunks contain several smaller chunks [16]. After this step, it is necessary for the source code to be processed and synchronized with the source code ontology. In the environment, SCRO (Source-code Ontology) is used as the source code ontology because it was created to support the main tasks of software comprehension through the explicit representation of conceptual knowledge found in the source code [24]. This synchronization consists of the extraction of information from the project’s source code, such as methods, input and output values of each method, attributes, and classes, and population of the source code ontology. Once the source code ontology is populated, the next step is to interact with the folksonomy ontology. This allows new individuals created in this ontology through the creation of tags by the programmer to be associated according to the instances of individuals of the source code ontology. Lastly, the process results in a knowledge base that contains all created tags and their respective associations, derived from the domain knowledge received from the source code. The knowledge base consists of the very folksonomy ontology populated and inferred by inference mechanisms. The environment implementation is presented in the next subsection. **A. Environment Implementation** The environment was implemented according to the assumptions of sensemaking, folksonomy, and knowledge base. In addition, six implementation requirements were raised to support the source code comprehension process. They are: - **Requirement 1**: query and record domain information in the folksonomy ontology. Information refers to the knowledge acquired during the comprehension process, and should be semantically linked to allow queries and inferences (reasoning). - **Requirement 2**: populating source code ontology. The plug-in must provide a method for extracting semantic information from the source code and automatically populating the source code ontology. - **Requirement 3**: populating folksonomy ontology. Populating the domain ontology, which corresponds to the tags created, must be performed manually. As a result of sensemaking, the source code comprehension process is best developed manually because it is at this moment that the user assimilates and understands the source code. - **Requirement 4**: searches of instances in the ontology. - **Requirement 5**: allows to create, connect, provide, identify, query, and share tags during the source code comprehension process. • Requirement 6: integration with the working environment. In order to automatically extract the source code and allow direct interaction with the user, the system was designed and developed based on Eclipse 3.6 and Java 6 platform. 
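Neither the text nor Table I shows the internal representation of a tag, so the following Java sketch is only an illustration of the information described at the beginning of this section (keyword, creator, date, tagged code element, and the private/public visibility used in the Release and Sharing steps). In the actual plug-in this information is stored as individuals of the Knerr folksonomy ontology rather than as plain objects, and all names below are hypothetical.

```java
import java.util.Date;

/** Illustrative data model for a single tag assignment created during the Tagging step. */
class TagAssignment {
    enum Visibility { PRIVATE, PUBLIC }   // new tags start as private (Release step)

    final String keyword;        // free domain term chosen by the programmer, e.g. "discount-rule"
    final String creator;        // programmer who created the tag
    final Date createdAt;        // creation date and time
    final String codeElement;    // fully qualified class, method, or field being annotated
    Visibility visibility = Visibility.PRIVATE;

    TagAssignment(String keyword, String creator, Date createdAt, String codeElement) {
        this.keyword = keyword;
        this.creator = creator;
        this.createdAt = createdAt;
        this.codeElement = codeElement;
    }

    /** Sharing step: make the tag visible to everyone involved in maintaining the project. */
    void release() {
        this.visibility = Visibility.PUBLIC;
    }
}
```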
The source code ontology is automatically populated by the plugin, through QDox library [25], whereas the tags manipulation is manual, according to user action. The source code is the only input software artifact, whereas the remaining entries in the system are through manual intervention. Queries by created and populated tags occur through SPARQL-DL with OWL-API support library because there was no native support for SPARQL queries during the development of this research. Based on the requirements for extraction and manipulation of gathered knowledge, the TaggingSense plug-in was developed to manipulate ontologies and tagging in the source code, with the following functionalities: (i) Display tags related to the selected code: from a window, it is possible to analyze the relationship between the programming-related object and the associated domain concept (tag); (ii) Display tags in tree format: from the list of tags, it is possible to find the source code related to the selected tag; and (iv) Display use of all tags: list all public tags created by any person, in addition to private tags authored by the current user. In addition to the features described, the plug-in allows the addition of new tags and makes the tags public, thus allowing other users to view the tags and use them collaboratively. V. EXPERIMENT To evaluate the feasibility of the method and the environment, an experiment was proposed with the goal of answering the initial question of this research: Is it possible to reduce the time and effort of source code comprehension, and thus increase the quality and efficiency of software maintenance? To evaluate the experiment, three criteria were defined: (i) programmer behavior: evaluation based on observations from an expert who accompanied the experiment; (ii) development time: this was considered a metric to measure method efficiency; (iii) quality of maintenance performed: an assessment as to whether the requested improvements were implemented as expected. To conduct the experiment, four IT professionals, who work in a midsize software company, were selected. The selected professionals belong to two distinct classifications: junior, professionals with less than five years of experience in OOP (Object-Oriented Programming), software architectures, design patterns, organization and best coding practices; and senior, programmers with equals or more than five years of experience in system development with knowledge of working on large, complex projects. The participants were requested to make two improvements to an existing system that was unknown to them. The system consisted of a salesforce automation project developed in Java language for mobile devices. Its initial release was designed to run on PALM OS, Windows Mobile, and Android devices. The experiment was divided into three parts, each part containing a specific purpose and applied to specific participants, as summarized in Table II. In addition, a maximum execution time for each maintenance task was stipulated. <table> <thead> <tr> <th>TABLE II. 
EXPERIMENT DESCRIPTION</th> </tr> </thead> <tbody> <tr> <td>Participants</td> </tr> <tr> <td>----------------</td> </tr> <tr> <td><strong>Experiment 1</strong></td> </tr> <tr> <td>Junior A</td> </tr> <tr> <td>Senior A</td> </tr> <tr> <td>Evaluation</td> </tr> <tr> <td>Improvement time and location</td> </tr> <tr> <td><strong>Experiment 2</strong></td> </tr> <tr> <td>Junior B</td> </tr> <tr> <td>Senior B</td> </tr> <tr> <td>Evaluation</td> </tr> <tr> <td>Improvement time and location; name and number of new tags created during the process.</td> </tr> <tr> <td><strong>Experiment 3</strong></td> </tr> <tr> <td>Junior B</td> </tr> <tr> <td>Senior B</td> </tr> <tr> <td>Evaluation</td> </tr> <tr> <td>Maintenance time and quality; number of new tags created.</td> </tr> </tbody> </table> A. Results Analysis of the results was performed mainly in a qualitatively manner. In this analysis, the purpose of the experiments was considered, and the experiments were designed so that a comparison could be made, as described in Table III. In experiment 1, senior participant A showed difficulties when attempting to find the location (class/method) that caused the parameter to perform the validation requested for this experiment. However, he was able to perform the experiment successfully in 16 minutes, and executed the maintenance in the expected class and method. Junior participant A could not find the correct location of the maintenance in the stipulated time. Even after being shown the location where the maintenance should be performed, the participant failed to complete the task successfully within the stipulated time because, although the maintenance was performed correctly, the code was not implemented in the expected method. In experiment 2, junior participant B did not use the plug-in as a support tool and could not find the correct method where the improvement should be implemented. Senior participant B achieved this improvement in 12 minutes, and did not need to receive any type of help or advice. However, neither senior participant B nor junior participant B implemented an improvement on the desired method and class. In experiment 3, participants had access to the tags. Junior participant B started the maintenance using the available tags. Through the tags, the class attribute that had the value that needed to be changed was easily deduced. After the locating task was performed all locations that called the attribute in question were searched by the programmer in the source code. Every item in each code snippet that was located was verified against the related tag. Junior participant B performed the activity in merely eight minutes, without any type of help or support. Compared with senior participant A who ran the same maintenance in experiment 1 without the aid of tags, junior participant B was faster because senior participant A performed the same maintenance in 16 minutes. In turn, senior participant B, who had access to the tags, implemented the proposed improvement in four minutes; half the time displayed by junior participant B. Table IV presents a summary of the maintenance time required by senior and junior programmers. ### TABLE IV. COMPARISON BETWEEN TIME OF SAME MAINTENANCE WITH AND WITHOUT TAG <table> <thead> <tr> <th>Participant</th> <th>Without tags</th> <th>With tags</th> </tr> </thead> <tbody> <tr> <td>Junior Group</td> <td>30 min</td> <td>8 min</td> </tr> <tr> <td>Senior Group</td> <td>16 min</td> <td>4 min</td> </tr> </tbody> </table> VI. 
DISCUSSION

In experiments 1 and 2, the tag features were not available to programmers during the comprehension process, whereas in experiment 3 the tags were made available to assist comprehension. From the results, it can be concluded that the development of sensemaking is heavily influenced by the availability of such features. The group of junior programmers who did not use tags required an average of 30 minutes to perform the proposed maintenance; with the tags, this time decreased to eight minutes, a reduction of roughly 73%. In the same way, the senior group performed the same maintenance in 16 minutes without tags, whereas with tags this time decreased to four minutes, a reduction of 75% for this class of developers (see Table IV). In experiment 2, where no tags were available but the possibility of creating and using them was offered, only the group of senior participants benefitted. The tags they created were used as waypoints (identification of locations) and as memorization topics extracted from the source code. The created tags thus helped with source code navigation, assisting developers in locating code among the many classes and methods and preventing them from getting lost while navigating the source code. In contrast, in the experiment where the tags were already created and available, only the junior group added a new tag. The new tag served the same purpose as for the other group, that is, as a waypoint. We can conclude that in unfamiliar environments, extracting source code knowledge is easier for more experienced developers, precisely because they have more experience. It was also observed that in environments where knowledge about the code was already present, senior programmers did not process new knowledge, whereas junior programmers were guided by the existing tags and even added a new related tag. This failure to process new knowledge corroborates the conclusion of the study in [26], which showed that software engineers have little interest in studying application domain knowledge when performing a specific maintenance task, and consider only knowledge related to software engineering (programming, development environment, and application implementation). The authors in [26] concluded that developers cultivate past knowledge, and that searching for new knowledge is a costly process performed only when there is a clear need and no easier alternative. According to [26], software engineers attempt to understand only what is necessary for a system to solve the current problem, and then tend to forget the details of what they learned. The senior programmers in experiment 2 showed, on average, 60% higher performance than the group of junior programmers in the same experiment. In this experiment, only the senior programmers used the feature for extracting knowledge from the source code. This supports the observation that sensemaking is best developed when foundations and past experience already exist [8]. However, as already discussed, in an environment where the knowledge contained in the source code has previously been extracted by an expert with greater knowledge and is made available via tags to those with less experience, a significant gain in performance is obtained. Thus, we can conclude that the proposed method for extracting and sharing knowledge of the source code is effective enough to improve the overall performance of the development team.
VII. CONCLUSIONS

The software maintenance field is complex, mainly because it depends on a source code comprehension process, an activity that demands considerable cognitive effort from the people involved. Several studies have been carried out to facilitate code comprehension; however, this process can still be improved. Knowledge extracted directly from the source code through sensemaking is rich in important and valuable details that can be applied to source code comprehension. This knowledge is best utilized when stored by means of ontologies and disseminated to more people using Semantic Web technologies. With this process, the knowledge can not only be extracted but also shared with those involved, benefitting the entire team. Through the results of our experiments, we demonstrated that the proposed TaggingSense method is viable: knowledge extraction, processing, and sharing positively assist the process of source code maintenance and comprehension, bringing benefits such as reduced time, increased quality, and greater confidence in the changes made. We also showed that the proposed method can guide programmers to the exact location of the required improvement, preventing maintenance from being performed in the wrong places, which could affect the quality of the program or open the possibility of security breaches. Thus, the main question of this research could be answered: it is possible to reduce the time and effort of source code comprehension during maintenance. However, we plan to extend the study to a larger number of participants, to evaluate the reaction of programmers with different educational backgrounds, and to assess the impact of personal and organizational culture and customs.

REFERENCES
{"Source-Url": "https://ksiresearchorg.ipage.com/seke/seke15paper/seke15paper_38.pdf", "len_cl100k_base": 5826, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 21013, "total-output-tokens": 7616, "length": "2e12", "weborganizer": {"__label__adult": 0.00035381317138671875, "__label__art_design": 0.000263214111328125, "__label__crime_law": 0.0002772808074951172, "__label__education_jobs": 0.0008320808410644531, "__label__entertainment": 4.392862319946289e-05, "__label__fashion_beauty": 0.00013017654418945312, "__label__finance_business": 0.00016164779663085938, "__label__food_dining": 0.00025653839111328125, "__label__games": 0.0003941059112548828, "__label__hardware": 0.0004851818084716797, "__label__health": 0.0003437995910644531, "__label__history": 0.00013208389282226562, "__label__home_hobbies": 7.092952728271484e-05, "__label__industrial": 0.00022804737091064453, "__label__literature": 0.00020575523376464844, "__label__politics": 0.00013840198516845703, "__label__religion": 0.0003147125244140625, "__label__science_tech": 0.00403594970703125, "__label__social_life": 9.1552734375e-05, "__label__software": 0.004512786865234375, "__label__software_dev": 0.98583984375, "__label__sports_fitness": 0.00024211406707763672, "__label__transportation": 0.00032019615173339844, "__label__travel": 0.00015544891357421875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34666, 0.01683]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34666, 0.55325]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34666, 0.92263]], "google_gemma-3-12b-it_contains_pii": [[0, 5159, false], [5159, 11194, null], [11194, 17305, null], [17305, 23302, null], [23302, 28752, null], [28752, 34666, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5159, true], [5159, 11194, null], [11194, 17305, null], [17305, 23302, null], [23302, 28752, null], [28752, 34666, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34666, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34666, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34666, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34666, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34666, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34666, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34666, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34666, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34666, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34666, null]], "pdf_page_numbers": [[0, 5159, 1], [5159, 11194, 2], [11194, 17305, 3], [17305, 23302, 4], [23302, 28752, 5], [28752, 34666, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34666, 0.25658]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
c5bce123c9f55dc0d75c6b6a5cdb1902401e23dd
[REMOVED]
{"Source-Url": "https://www.uow.edu.au/~hoa/papers/truong-wesoa13.pdf", "len_cl100k_base": 7035, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 32151, "total-output-tokens": 8027, "length": "2e12", "weborganizer": {"__label__adult": 0.00028014183044433594, "__label__art_design": 0.00028228759765625, "__label__crime_law": 0.0002448558807373047, "__label__education_jobs": 0.0008702278137207031, "__label__entertainment": 6.198883056640625e-05, "__label__fashion_beauty": 0.0001322031021118164, "__label__finance_business": 0.00041794776916503906, "__label__food_dining": 0.0002884864807128906, "__label__games": 0.0003948211669921875, "__label__hardware": 0.0009179115295410156, "__label__health": 0.0004611015319824219, "__label__history": 0.00023484230041503904, "__label__home_hobbies": 8.83340835571289e-05, "__label__industrial": 0.00035858154296875, "__label__literature": 0.00024580955505371094, "__label__politics": 0.0002267360687255859, "__label__religion": 0.0003604888916015625, "__label__science_tech": 0.033721923828125, "__label__social_life": 9.328126907348631e-05, "__label__software": 0.00827789306640625, "__label__software_dev": 0.951171875, "__label__sports_fitness": 0.00021851062774658203, "__label__transportation": 0.0005173683166503906, "__label__travel": 0.00019216537475585935}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 33445, 0.00929]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 33445, 0.54849]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 33445, 0.88833]], "google_gemma-3-12b-it_contains_pii": [[0, 2801, false], [2801, 6209, null], [6209, 9665, null], [9665, 11625, null], [11625, 14920, null], [14920, 18005, null], [18005, 21059, null], [21059, 23766, null], [23766, 26534, null], [26534, 28766, null], [28766, 30536, null], [30536, 33445, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2801, true], [2801, 6209, null], [6209, 9665, null], [9665, 11625, null], [11625, 14920, null], [14920, 18005, null], [18005, 21059, null], [21059, 23766, null], [23766, 26534, null], [26534, 28766, null], [28766, 30536, null], [30536, 33445, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 33445, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 33445, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 33445, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 33445, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 33445, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 33445, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 33445, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 33445, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 33445, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 33445, null]], "pdf_page_numbers": [[0, 2801, 1], [2801, 6209, 2], [6209, 9665, 3], [9665, 11625, 4], [11625, 14920, 5], [14920, 18005, 6], [18005, 21059, 7], [21059, 23766, 8], [23766, 26534, 9], [26534, 28766, 10], [28766, 30536, 11], [30536, 33445, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 33445, 0.10945]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
9ef606aa298512015da85d56c5e1d37c8b983a45
Research Article Enhancing E-Health Information Systems with Agent Technology Minh Tuan Nguyen, Patrik Fuhrer, and Jacques Pasquier-Rocha Department of Computer Science, University of Fribourg, 1700 Fribourg, Switzerland Correspondence should be addressed to Patrik Fuhrer, patrik.fuhrer@unifr.ch Received 30 April 2008; Accepted 1 September 2008 Recommended by Yang Xiao Agent Technology is an emerging and promising research area in software technology, which increasingly contributes to the development of value-added information systems for large healthcare organizations. Through the MediMAS prototype, resulting from a case study conducted at a local Swiss hospital, this paper aims at presenting the advantages of reinforcing such a complex E-health man-machine information organization with software agents. The latter will work on behalf of human agents, taking care of routine tasks, and thus increasing the speed, the systematic, and ultimately the reliability of the information exchanges. We further claim that the modeling of the software agent layer can be methodically derived from the actual "classical" laboratory organization and practices, as well as seamlessly integrated with the existing information system. Copyright © 2009 Minh Tuan Nguyen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. 1. Introduction The business of today’s complex organizations such as hospitals in a healthcare network relies on sophisticated information systems which often inherit many weaknesses from the past. For instance, due to its lack of flexibility, a legacy information system cannot integrate the ever-increasing requirements in order to assist the users or to free them from many routine tasks. (A legacy information system represents a massive, long-term investment in the past [1], with poor system quality, design, and architecture. It is costly to adapt to rapidly changing business requirements.) This weakness of legacy information systems is one of many aspects of the “automation gap.” Another major weakness relates to the increasing physical mobility of users. Many legacy information systems are designed for users working at fixed client workstations in fixed offices. They do not take into account recent advances in mobile technology such as PDAs, mobile phones, and smartphones. In many legacy information systems, the information flow still requires human interaction between actors either face-to-face or through the plain old telephone communication system to get things done (information delivery, alert sending, people search, feedback, etc.). Automation gap, lack of mobility, and direct human interaction result in an inefficient information flow and data processing: (i) nonautomated information search and retrieval are time-consuming; (ii) errors may occur in data transmission by humans; (iii) users must be physically present at either end of the communication link to successfully establish a conversation (i.e., only synchronous interaction); (iv) the lack of a systematic activity log makes it difficult to determine the responsibilities of actors when problems or errors occur during a business process. This research aims at applying a systematic agent technology approach to overcome these weaknesses. 
The design of a software agent layer on top of a legacy information system offers many advantages to users: (i) it adds interesting properties to the information system: ubiquitousness, intelligence, scalability, systematic management, logging of the information flows, and so forth; (ii) it helps humans to interact efficiently among themselves and with the information system. Indeed, human effort and time can be saved by transferring routine tasks from humans to software. After this first introductory part, Section 2 provides background information on software agents, agents platforms, and development methodologies in general. Section 3 presents a case study conducted at the HCF Laboratory (HCF is the French acronym for Hospital of the state of Fibourg, Switzerland). This section is further divided as follows: (i) the mission and the information system of the HCF Laboratory are presented; (ii) the weaknesses and potential problems of the current information system are identified; (iii) finally, a software agent-based solution to enhance the system is proposed. Section 4 focuses on the medical multiagent system (MediMAS) prototype, which represents our first implementation of the proposed agent-based solution. It simulates an end user’s (lab personnel, physician) point of view by considering software agents as personal assistants and by showing them in action. Section 5 shows how it was possible to define the requirements and to sketch the architecture of the prototype using a well-defined and systematic approach, and this section also briefly describes its main components. Finally, Section 6 concludes this paper by summarizing the main achievements of our work and by discussing some extensions and improvements planned for the future. 2. Background It is out of the scope of this paper to offer full background information on software agents and their related technologies. Therefore, the three next subsections only provide a short introduction to the domain and refer the interested reader to the abundant literature for further details. 2.1. What is an Agent? The term “agent” appears in a wide spectrum of research areas such as economics, physics, biology, mathematics, artificial intelligence, and software engineering. Therefore, a unified notion of agent is difficult to extract from the research literature. In this section, we do not aim to coin a new definition, but to highlight the fundamental properties of agents from two published definitions. Definition 1. An autonomous agent is a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future [2]. Definition 2. An agent is a small, autonomous, or semi-autonomous software program that performs a set of specialized functions to meet a specific set of goals, and then provides its results to a customer (e.g., human end-user, another program) in a format readily acceptable by that customer [3]. The first definition proposes the most general notion of agent which may be a person, a robot, a piece of software, and so forth. The second definition focuses on agents in the software domain which is of interest to us. 
Both definitions exhibit the following basic properties of software agents: (i) autonomy: agents have some degree of control over their actions and can work without human intervention; (ii) social ability: agents can coordinate their actions and cooperate with other agents to achieve their goals, using a common language to communicate with each other; (iii) reactivity: agents can perceive their environment and respond to environmental changes; (iv) proactiveness: agents can act on their own initiative to achieve their goals instead of simply reacting to the environment. For our research purposes, we further characterize a software agent as a running program object, capable of initiating, receiving, executing, or rejecting a message autonomously in order to attain its goals during its life cycle.

2.2. Agent Platforms. An agent platform is a software environment in which agents are incarnated and operate to achieve their goals. The agent platform must provide the following minimum set of functionalities [4, 5]: (i) agent management (creating, starting, removing, migrating agents, etc.), (ii) agent communication, (iii) supervision of agents and error notification, (iv) a security mechanism. Today, several platforms are available (e.g., JADE [6], JACK [7], AgentBuilder [8], Aglet [9]), and research is being conducted to define new platforms for building agent systems. JADE was selected based on two criteria: (i) the platform is well proven; (ii) it is scalable for our research and experimental purposes. The Java Agent DEvelopment Framework (JADE) is a software framework fully implemented in the Java language. It simplifies the implementation of multiagent systems through a middleware that complies with the Foundation for Intelligent Physical Agents (FIPA) specifications and through a set of graphical tools that support the debugging and deployment phases. (FIPA is an IEEE Computer Society standards organization that promotes agent-based technology and the interoperability of its standards with other technologies [10].) The agent platform can be distributed across machines (which do not even need to share the same OS), and the configuration can be controlled via a remote graphical interface.

2.3. Agent-Oriented Methodologies. The concept of agents was first introduced in the 1970s. However, the development of agent-based systems is a relatively new domain of software engineering. Today, several agent-oriented methodologies have been developed (e.g., Gaia [13], MaSE [14], and MAS-CommonKADS [15]). They are based on different theoretical foundations [16]: artificial intelligence (AI), object-oriented programming (OOP), combinations of AI and OO, as well as the $i^*$ organization modeling framework (Tropos) [17]. These methodologies contribute significantly to the rigorous and systematic development of agent-based systems. The JADE Methodology [18] is a new agent-oriented methodology that supports the ontology approach. It encompasses the analysis and design phases for developing software agents on the JADE platform. This methodology proposes to build the ontology at the end of the design phase in order to share knowledge between software agents.

3. HCF Laboratory: Current Organization and Software Agent Solution

3.1. The HCF Laboratory. The HCF Laboratory [19] provides medical analyses ordered by hospitals in the state. The laboratory is located on several sites covering different domains: haematology, immuno-haematology, chemistry, and microbiology.
It receives daily hundreds of orders with specimens, analyzes the specimens, then delivers final results to the requesters (doctors, hospital departments, etc.). The method of transmission of test results depends on their urgency level. Besides the lab equipment for carrying out medical analysis, the personnel of the HCF Laboratory are supported in their daily tasks by the WinDMLAB Multisite laboratory information system [20], coupled with a traditional telephone communication system. They constitute two major components of the current HCF Laboratory Information System (cLIS). cLIS ensures the availability of medical results in a centralized database and their transmission: (i) between departments and sites of the laboratory, (ii) between the laboratory and the HCF, (iii) between the laboratory and other requesters in the province of Fribourg. Each requester (doctors, hospital departments, etc.) can access and review the test reports on their patients at any level of detail. The WinDMLAB Multisite system and the traditional telephone communication system must coexist to achieve all the functionalities as cLIS was initially designed for. Indeed, several scenarios still require the telephone communication system to get things done, for example, in the following circumstances: (i) a lab technologist calls a physician to transmit patient’s test results; (ii) a physician calls the laboratory to obtain by phone the test results; (iii) a lab technologist asks, by phone, his director to make a decision in an emergency situation, and so forth. Figure 1 illustrates cLIS as a three-layer system in which both the laboratory information system and the telephone communication system coexist: (i) the first layer defines the information system infrastructure, which is composed of servers running different operating systems and application software in a computer network; (ii) the second layer is the WinDMLAB Multisite system; (iii) the third layer provides the telephone communication system which allows requesters and laboratory staff to exchange test results via voice and fax. One can notice that human actors interact with each other directly or indirectly through the second and third layers. 3.2. Potential Problems. cLIS raises numerous potential problems [21]: (i) even though the major part of results (80%) are transferred through automats and WinDMLAB Multisite system, the quality of services provided by cLIS depends to a more or less extent on human factors, for example, any mistake of a lab technologist in transferring test results to a doctor may cause dramatic consequences on patients; (ii) cLIS does not allow the requesters to know when results become available; (iii) the processes which take place in the telephone communication system (layer 3) cannot be logged automatically in cLIS for monitoring and tracking purposes; (iv) physicians who use cLIS spend a lot of time searching, retrieving, consulting, and interchanging the test results; (v) to establish a successful phone communication, two actors must be present, therefore, time is wasted if either one cannot reach the other when needed; (vi) because of the time-consuming use of cLIS in many scenarios, physicians and laboratory personnel have less time for their real medical activities. The above-identified problems, caused by human operations, often prevent information to flow smoothly from cLIS to actors and vice versa. These problems illustrate the so-called “automation gap” [22, 23]. 
What is needed is a systematic, strategic approach that automates error-prone human processes.

3.3. A Software Agent Solution. The "automation gap" may be filled using different software technologies, for example JavaSpaces with SMS message technology, Web services technology, multiagent technology, and so forth. It is out of the scope of this paper to compare these technologies. Our purpose is to propose a methodology that allows us to migrate from the legacy, human agent-centered cLIS to a new, software agent-based cLIS.

In cLIS, actors (laboratory personnel, laboratory director, physicians, etc.) are human agents. A human agent is a professional characterized by experience, skills, intelligence, reactivity, proactiveness, and the ability to work autonomously and to cooperate with other human agents. They also have weaknesses inherent to human beings. Our proposal aims at designing software agents which will work on behalf of human agents with similar characteristics. In other words, our solution delegates the daily routine tasks performed by human agents to software agents. In this new approach, each actor is assigned a personalized software agent which acts as his personal assistant. We also say that the actor is the assistant's owner. When talking about these personal assistants, we could also use the "virtual twin" metaphor [24] or consider them as avatars representing humans, as in virtual worlds. The assistant receives a list of things to do from its owner, performs the assigned tasks in close cooperation with other software agents, and delivers the final result to the owner. In our solution, the software agents are designed on layer 3, shifting the telephone communication system up to the fourth layer (cf. Figure 1).

The software agent solution offers significant advantages for cLIS: (i) the features and functionalities of WinDMLAB Multisite are maintained, preserving the investment in this legacy laboratory application; (ii) in the new software agent-based cLIS, the delegation of routine tasks from human agents to software agents (personal assistants) allows the human actors to focus their attention on specimen analysis, test result interpretation, medical decision making, and so forth; (iii) the new software agent-based cLIS, coupled with mobile devices (PDAs, mobile phones, smartphones, etc.), allows the actors to view the test results transmitted by their personal assistants anywhere and at any time; (iv) all events and actions are systematically logged and centralized to support auditing of the system. Traceability and exception investigation, for example to answer a patient's complaint, are also improved.

4. The MediMAS Prototype

The MediMAS prototype [21] is the first experimental implementation of the proposed agent-based solution. A case study was conducted at the HCF Laboratory to test it in the real world and to explore different practical aspects.

4.1. Agents as Personal Assistants. MediMAS has six agent categories: (i) physician agents, (ii) lab personnel agents, (iii) lab director agents, (iv) alert manager agent, (v) integration agent, and (vi) audit agent. Figure 2 depicts their organization, in which the agents assist different categories of humans in their daily tasks.
This figure also shows the social ability of the agents to cooperate with each other in order to automate the information flow between the actors themselves, as well as between the actors and the cLIS.

4.2. Software Agents in Action

4.2.1. Environment Setup. In the environment of our MediMAS prototype, the integration agent (riAgent) plays a central role. Therefore, it is launched first with the JADE platform before starting any other agent. When the setup is complete, the agents are attached to the MediMAS containers (a JADE container is a runtime environment for agents [25]): (i) riAgent is the integration agent, (ii) amAgent is the alert manager agent, (iii) adAgent is the audit agent, (iv) pAgents are the physician agents, (v) lpAgents are the lab personnel agents, (vi) ldAgents are the lab director agents. In the MediMAS system, each human actor (physician, lab personnel, lab director) is assigned an Agent and, simultaneously, one or more GuiAgents. For example, a single pAgent (TuanAgent) and two GuiAgents are assigned to the physician Tuan. We now set up our sample WinDMLAB database by feeding it with fictitious test results for the specimens nlab-007, nlab-008, and nlab-009, in order to simulate three test results that are recorded into the database by the lab analysers and validated by the lab technologist. (Our sample WinDMLAB database was developed using the SQLite RDBMS [26].) Let us introduce the actors who will play different roles in our scenario: (i) Tuan is a physician in the HCF and is assigned the ID 3; (ii) Jacques is the lab director; (iii) Patrik is a lab technologist in the HCF Laboratory; he is working on the specimens nlab-007, nlab-008, and nlab-009, ordered by the caregiver Tuan. In the following scenario, starting with the notification of results availability, we study in finer detail the human actors, their assigned personal assistant agents, and their interactions.

Table 1: The three simulated specimens.

<table> <thead> <tr> <th>Criticality/priority</th> <th>None</th> <th>Urgent</th> </tr> </thead> <tbody> <tr> <td>Non-critical</td> <td>nlab-007</td> <td>nlab-008</td> </tr> <tr> <td>Critical</td> <td>—</td> <td>nlab-009</td> </tr> </tbody> </table>

4.2.2. Notification of Results Availability. (i) Patrik has finished the analysis of all specimens. The three test results are recorded into the WinDMLAB database. Table 1 shows the priority of the specimens and their degree of criticality. (The priority of an analysis is set by its requester; the degree of criticality depends on its result and is set by the lab technologist.) (ii) At completion of the nlab-007 analysis, Patrik observes that the test results are non-critical (see Table 1). In order to notify Tuan (requester ID = 3) of the availability of the test results, Patrik enters nlab-007 and clicks on the button beside the NLAB field to automatically fill in the other fields (Figure 3). Finally, Patrik clicks the "notify result" action button to direct his lpAgent to announce the availability of the test results to the requester. (iii) Patrik treats the other results in the same manner. (iv) Patrik's lpAgent sends the announcements of the results to Tuan's pAgent. (v) It also sends these announcements to amAgent, which records them and starts to closely monitor the read/unread status of the new test results.

4.2.3. Acknowledgments of Notification Receipt. (i) Concurrently with amAgent, Tuan's pAgent receives the announcements and refreshes the list of pending results in the upper pane of its window by adding the new announcements of the nlab-007, nlab-008, and nlab-009 test results, flagged as "available" in the Status of Announced Result column (Figure 4).
(ii) Tuan clicks on the received announcement nlab-007 in the list of pending results in order to preview the details of the test results. Tuan's pAgent requests riAgent to retrieve the contents of the nlab-007 test results and displays them in the lower pane of its window (Figure 4). (iii) Tuan clicks the "confirm" button to acknowledge receipt of the notified announcement of nlab-007 and thus directs his pAgent to send this acknowledgement to amAgent. (iv) amAgent updates the status of nlab-007 as "read" and removes the nlab-007 announcement from its own internal list. This terminates the monitoring of nlab-007 by amAgent. (v) Once the announcement is flagged as "read," Tuan's pAgent removes nlab-007 from the list of pending results (Figure 5). (vi) Tuan further acknowledges the nlab-008 result. One notices that, in the pAgent's window, each announcement is first flagged as "available" during a predefined time interval, for example, 20 minutes for normal test results.

4.2.4. Problem Detection and Alert. (i) For nlab-009, amAgent has not received an acknowledgment message from Tuan's pAgent within the preset time interval. After three unsuccessful warnings, amAgent escalates up the organizational hierarchy by sending an alert to Jacques's ldAgent. (ii) Jacques's ldAgent receives the nlab-009 alert from amAgent and displays it in the ldAgent's window (Figure 6). (iii) Jacques clicks on the nlab-009 alert in order to preview it. Jacques's ldAgent requests riAgent to retrieve the contents of the nlab-009 test results and displays them in the lower pane of its window. (iv) Jacques contacts Tuan to manually transmit the test results to him.
(v) Jacques clicks the "confirm" button to acknowledge receipt of the nlab-009 alert and thus directs his ldAgent to send this acknowledgment to amAgent. (vi) amAgent updates the status of nlab-009 as "read" and removes the nlab-009 announcement from its own internal list. This terminates the monitoring of nlab-009 by amAgent. (vii) Once the announcement is flagged as "read," Jacques's ldAgent and Tuan's pAgent remove nlab-009 from their respective windows. (viii) Throughout the above-simulated scenario, each agent sends to the audit agent (adAgent) the start and stop times of every performed task along with its relevant information (date and time, involved actors, action, etc.).

Table 2: Tasks performed by agent categories.

<table> <thead> <tr> <th>Agent categories</th> <th>Tasks</th> </tr> </thead> <tbody> <tr> <td><strong>Physician agent</strong></td> <td>Receives notification of test results availability from the lab personnel agents.</td> </tr> <tr> <td></td> <td>Receives alerts of unread available test results from the alert manager agent.</td> </tr> <tr> <td></td> <td>Notifies the physician that test results are available.</td> </tr> <tr> <td></td> <td>Queries the integration agent for test results according to search criteria determined by the physician.</td> </tr> <tr> <td></td> <td>Receives test results data from the integration agent.</td> </tr> <tr> <td></td> <td>Displays test results data to the physician.</td> </tr> <tr> <td></td> <td>Informs the alert manager agent about the read/unread status of the test results sent to the physician.</td> </tr> <tr> <td></td> <td>Informs the audit agent before and after each action.</td> </tr> <tr> <td><strong>Lab personnel agent</strong></td> <td>Notifies the alert manager agent that test results are available.</td> </tr> <tr> <td></td> <td>Notifies the physician agents that results are available.</td> </tr> <tr> <td></td> <td>Informs the audit agent before and after each action.</td> </tr> <tr> <td><strong>Lab director agent</strong></td> <td>Receives alerts from the alert manager agent signaling the abnormal unread status of a test result.</td> </tr> <tr> <td></td> <td>Reports the alert to the lab director.</td> </tr> <tr> <td></td> <td>Acknowledges to the alert manager agent that the lab director has read the alert sent to him.</td> </tr> <tr> <td></td> <td>Informs the audit agent before and after each action.</td> </tr> <tr> <td><strong>Alert manager agent</strong></td> <td>Alerts the lab director agent as soon as the abnormal unread status of a given test result is detected.</td> </tr> <tr> <td></td> <td>Receives test results from the lab personnel agent.</td> </tr> <tr> <td></td> <td>Receives from the physician agent the status "test results have been read by physician."</td> </tr> <tr> <td></td> <td>Receives from the lab director agent the status "alert message has been acknowledged by the lab director."</td> </tr> <tr> <td></td> <td>Informs the audit agent before and after every action.</td> </tr> <tr> <td><strong>Integration agent</strong></td> <td>Retrieves test results from cLIS, based on the query issued by the physician agent or the lab director agent.</td> </tr> <tr> <td></td> <td>Delivers the extracted test results to the requester agent.</td> </tr> <tr> <td></td> <td>Informs the audit agent before and after every action.</td> </tr> <tr> <td><strong>Audit agent</strong></td> <td>Receives the action start/end notifications and logs them with their date and time.</td> </tr> </tbody> </table>
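Both the scenario above and Table 2 describe the audit agent (adAgent) as receiving a notification before and after every action and logging it. As a purely illustrative sketch (not the actual MediMAS source code, which is available at [32]), such an agent could be written on the JADE platform as follows; the class name and log format are hypothetical.

```java
import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;
import jade.lang.acl.MessageTemplate;

// Hypothetical sketch of an audit agent: it waits for FIPA ACL INFORM messages sent
// by the other agents before and after each action, and logs them with a timestamp.
public class AuditAgent extends Agent {

    @Override
    protected void setup() {
        // Only react to INFORM messages; other performatives are ignored.
        final MessageTemplate template = MessageTemplate.MatchPerformative(ACLMessage.INFORM);

        addBehaviour(new CyclicBehaviour(this) {
            @Override
            public void action() {
                ACLMessage msg = myAgent.receive(template);
                if (msg != null) {
                    // A real implementation would persist this entry in a database or audit log.
                    System.out.println(System.currentTimeMillis() + " | "
                            + msg.getSender().getLocalName() + " | " + msg.getContent());
                } else {
                    block(); // suspend the behaviour until the next message arrives
                }
            }
        });
    }
}
```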
We have simulated some specimens to demonstrate the working of the assistant agents in the MediMAS prototype and the benefits of a software agent approach to enhance a legacy information system. In order to fully grasp the power of our solution, one must, however, consider the real laboratory, where hundreds of specimen analyses are ordered every day by dozens of physicians. After a rather simple configuration process, each human actor will be able to transparently rely on his software counterpart to be reminded of what he has to do next with respect to the hospital regulations. Furthermore, all communication exchanges and reminder warnings will be coordinated, delivered on time to all the appropriate actors, and properly logged for future reference.

At this stage, the attentive reader has certainly noticed that we used a very high-level approach to describe the concrete run-time working of the MediMAS prototype. It is, however, very important to understand that MediMAS components are not just plain objects: they are, indeed, software agents in the sense of the definition given at the end of Section 2.1. Because of that, the use of agent technologies in general, and of an agent platform in particular, is a necessity if one does not want to reinvent the wheel by implementing from scratch many low-level services such as naming and yellow pages services, code mobility support, debugging and monitoring/management facilities, security mechanisms, agent communication, or resource control. For example, the alert manager agent, amAgent, introduced above, is a running program object with its own thread of control (i.e., having its own autonomy), which (i) reacts to messages from the physician and lab personnel agents by updating its list of pending test results; (ii) aims to detect, in a timely fashion, test results with an abnormal unread status and to act upon them; (iii) acts autonomously (i.e., without the need for a special external event or method call) in order to fulfill its goal. It does so by constantly monitoring its list of pending test results and by sending warning messages to the appropriate agents (the physician and lab director ones) according to the hospital regulations. Messages are based on the FIPA ACL Message standard [10], and the behaviours, or agent "intelligence", are programmed in Java classes using either plain procedural code or declarative rules with the help of the Jess to JADE Toolkit developed by our research group [27]. Note that with the latter technology, it is even possible to change the agent behaviour by modifying rules at run-time (e.g., escalating up the organizational hierarchy after two instead of three unsuccessful warnings, or warning another physician in the same group, if available, instead of the lab director).

5. Development Methodology

We have designed our own "in-house" methodology, inspired by the theoretical foundations mentioned in Section 2.3. More precisely, we adapted the JADE Methodology [18] to our own purposes by integrating the ontology into the earlier phases of the modelling process. Our strategy has been applied to develop the MediMAS prototype. The next paragraphs present it in four phases (see Figure 7), while Figure 8 summarizes it and highlights the relationships between its different phases.

5.1. Phase I: Real-World System Analysis. The analyst perceives the current system in order to understand its goals, problems, and future requirements.
This phase aims at defining a common vocabulary and describing the current organization of entities (actors, human agents), use cases, and/or business processes of the system. The deliverables of Phase I consist of a well-defined set of goals and requirements, the common vocabulary describing the entities and their organization, a set of identified use cases, and business processes. In our case study, the outputs of the real-world system analysis are the three-layer information system structure of the HCF Laboratory (Figure 1) and UML activity diagrams of its business processes (Figure 9).

5.2. Phase II: Domain Ontology Definition. The Domain Ontology Definition phase takes the deliverables of Phase I as input and aims at defining the domain or application terminology standards and semantics. To this end, the analyst focuses on concepts, actions, predicates, and relations between concepts. In MediMAS, we adopt the following guidelines to build the ontology: (i) concepts are nouns (e.g., doctor, patient, analysis, etc.); (ii) actions are verbs or verbal phrases (e.g., SendResult, Alert, SendAvailableList, etc.); (iii) predicates are expressions that make statements about something and can be evaluated as true, false, or indeterminate (e.g., isTestResultCritical, isResultConfirmed, etc.); (iv) relations are expressions that establish the relationships between concepts. The output of this phase is the domain or application ontology, which the actors will use to understand each other in their communications. In software engineering, ontology development tools such as Protégé [28] and TopBraid Composer [29] have been developed to assist ontologists in building the domain or application ontology efficiently. The interested reader is referred to [30] for a graphical overview of the ontology we defined using the Protégé suite of tools.

5.3. Phase III: Agent-Based Modelling. The modelling phase consists of the following set of tasks, using the deliverables of Phases I and II as inputs: (i) identify and create the eligible software agents which will be assigned to actors; (ii) determine the tasks (also called the responsibilities) of each agent; (iii) specify the workflow of elementary operations in each task and the agent's operational behaviour; (iv) assign tasks, workflows, and behaviours to agents according to their roles in the organization. Figures 7 and 8 draw our attention to the iterative nature of the tasks within Phase III on the one hand, and between Phases II and III on the other. Indeed, successive refinement steps are required in order to enrich the domain ontology as new concepts, actions, predicates, and relations between concepts are identified. The deliverables of this phase are the documents (i) describing the agents in their different categories, and (ii) specifying all the tasks, workflows, and behaviours, and their assignment to agents. The agent categories and their assigned tasks in MediMAS are summarized in Table 2.

5.4. Phase IV: Implementation. The previous phases are platform-independent. In Phase IV, the selection of a platform closely impacts the implementation process. In our case study, the JADE platform was selected to implement the MediMAS prototype. This phase involves the programming team, who implement and test the agent-based system according to the model specifications.
To this end, the programmers use the deliverables of the previous phases as inputs and then translate them into system components which are extensions of the existing classes in JADE, namely: (i) the designed agents are translated into agent classes, according to the terminology used in JADE; (ii) the designed tasks, workflows, and behaviours are converted into behaviour classes in the sense of JADE. The domain ontology must also be implemented as an extension of the existing ontology classes in JADE. This task is achieved (i) either by manually coding the vocabulary, bean classes, ConceptSchema, AgentActionSchema, PredicateSchema, and so forth, or (ii) through the bean generator plug-in for Protégé [31]. The completion of Phase IV results in a multiagent system that fulfills the defined user goals and requirements and operates on the selected platform.

It would be out of the scope of this article to fully describe the software architecture of the MediMAS prototype. It is nevertheless worth giving an overview of its main software components. (The interested reader can find the class diagram of MediMAS as implemented on the JADE platform in [30]; its complete source code is available at [32].) A typical layered approach has been adopted (see Figure 10): the upper layer is an abstract layer providing the basic classes, interfaces, and agent types, and it directly extends the JADE platform. The second layer offers the main functionalities and default behaviours for each kind of identified agent type: the resource integration agents for seamless interfacing with legacy systems, the audit agents for addressing logging issues, and similar system supervisor agents which enhance the system with new services; and the personal assistant agents which embody the "virtual twin" paradigm. Note that this latter category is split into core and user interface agents. This separation allows for a one-to-many relationship between a personal agent's core part and several user interface agents which are deployed on the humans' computing devices (desktops/laptops and/or smartphones and/or web browsers, etc.). These agent families form the main vertical blocks of our architecture. Finally, the lowest layer is dedicated to application-specific implementations of the agents. In the case of MediMAS, this layer contains (i) the WinDMLAB integration agent, (ii) the alert manager agent, and (iii) the lab assistant, lab director, and physician personal assistant agents. Besides these blocks, there are two further components (rightmost in Figure 10): one for ontology-related issues and one for miscellaneous tools and utilities. This layered architecture actually provides a general framework that could be used for application domains other than our medical laboratory use case. In order to reuse the framework, one could simply inject a new ontology, attach the corresponding behaviours to the personal assistant agents, and implement the business logic of the system supervisor agents.
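As a purely illustrative example (not taken from the MediMAS source code in [32]), "injecting a new ontology" essentially means writing domain concept classes following the JADE convention: a plain Java bean implementing the Concept marker interface, for which a corresponding ConceptSchema can then be registered in the application ontology. The class name and fields below are hypothetical.

```java
import jade.content.Concept;

// Hypothetical concept bean (not part of the actual MediMAS ontology): a domain
// concept is written as a plain Java bean implementing JADE's Concept marker interface.
public class TestResult implements Concept {

    private String specimenId;   // e.g., "nlab-007"
    private boolean critical;    // degree of criticality, set by the lab technologist

    public String getSpecimenId() { return specimenId; }

    public void setSpecimenId(String specimenId) { this.specimenId = specimenId; }

    public boolean isCritical() { return critical; }

    public void setCritical(boolean critical) { this.critical = critical; }
}
```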
6. Conclusion

This research paper discusses the major features and benefits of our agent-based approach to enhance a hospital laboratory legacy information system. Such an approach preserves the investment in the legacy system and allows developers to seamlessly add new features which aim at filling the automation gap, satisfying the needs of growing user mobility, and providing intelligent assistance to users. Finally, a methodology to systematically adopt and implement such a solution is proposed, and it is validated with the implementation of the concrete MediMAS prototype.

6.1. Achievements. The current version of the MediMAS prototype provides physicians, lab personnel, and the lab director with software agents running on desktop computers. (The whole source code and related documentation are available for download from [32].) These agents act as personal assistants to free the actors from tedious and routine work so that they can really concentrate on their medical activities.

6.2. Work in Progress

6.2.1. Mobile MediMAS. Our research will extend the model to allow software agents to run on mobile devices (e.g., PDAs, mobile phones, smartphones, etc.). The agents that work for the same owner on different devices must collaborate and synchronize their tasks to efficiently assist the owner, who may work anywhere and at any time. A first prototypal version of this extended model is already available [32, 33], but it still needs some fine-tuning.

6.2.2. MediMAS Simulation Tool. The development of a simulation tool for MediMAS is another topic of our research. The tool offers healthcare experts the opportunity to visualize the working of the MediMAS prototype by simulation, and to gain insight into the properties of an agent-based system in the healthcare domain (ubiquitousness, intelligence, reactiveness, proactiveness, scalability, etc.). A first version of the tool is now available [34] and has been extensively used to debug and test the MediMAS prototype.

6.2.3. Adaptive MediMAS Agents. Within another project, we developed the Jess to JADE (J2J) toolkit [27], which allows JADE agents to seamlessly use the Jess rule engine [35] in order to perform the appropriate behaviour. This solution has been tested on our alert manager agent, and it allowed us to declaratively define and modify the agent behaviour at runtime.

6.2.4. Methodology Enhancement. A lightweight in-house agent-based system design methodology has been defined and applied in the MediMAS experimental project in the healthcare domain. Future extensions will enhance the methodology with additional modelling possibilities to design more complex real-world systems.

References
{"Source-Url": "https://www3.unifr.ch/inf/softeng/en/assets/public/files/research/publications/pdf/medimasIJTA.pdf", "len_cl100k_base": 7944, "olmocr-version": "0.1.49", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 37043, "total-output-tokens": 10316, "length": "2e12", "weborganizer": {"__label__adult": 0.0006203651428222656, "__label__art_design": 0.0006642341613769531, "__label__crime_law": 0.0007557868957519531, "__label__education_jobs": 0.003696441650390625, "__label__entertainment": 0.0001080632209777832, "__label__fashion_beauty": 0.0003619194030761719, "__label__finance_business": 0.0006976127624511719, "__label__food_dining": 0.0006666183471679688, "__label__games": 0.0010547637939453125, "__label__hardware": 0.002277374267578125, "__label__health": 0.0123443603515625, "__label__history": 0.0005359649658203125, "__label__home_hobbies": 0.00019919872283935547, "__label__industrial": 0.0006775856018066406, "__label__literature": 0.00045990943908691406, "__label__politics": 0.0003390312194824219, "__label__religion": 0.0006198883056640625, "__label__science_tech": 0.2421875, "__label__social_life": 0.00017750263214111328, "__label__software": 0.0235595703125, "__label__software_dev": 0.70654296875, "__label__sports_fitness": 0.0005779266357421875, "__label__transportation": 0.0007214546203613281, "__label__travel": 0.00033736228942871094}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 44564, 0.01497]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 44564, 0.29468]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 44564, 0.90315]], "google_gemma-3-12b-it_contains_pii": [[0, 3909, false], [3909, 8905, null], [8905, 11041, null], [11041, 13057, null], [13057, 14184, null], [14184, 19039, null], [19039, 20869, null], [20869, 21563, null], [21563, 22288, null], [22288, 27943, null], [27943, 32964, null], [32964, 38426, null], [38426, 44564, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3909, true], [3909, 8905, null], [8905, 11041, null], [11041, 13057, null], [13057, 14184, null], [14184, 19039, null], [19039, 20869, null], [20869, 21563, null], [21563, 22288, null], [22288, 27943, null], [27943, 32964, null], [32964, 38426, null], [38426, 44564, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 44564, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 44564, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 44564, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 44564, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 44564, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 44564, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 44564, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 44564, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 44564, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 44564, null]], "pdf_page_numbers": [[0, 3909, 1], [3909, 8905, 2], [8905, 11041, 3], [11041, 13057, 4], [13057, 14184, 5], [14184, 19039, 6], [19039, 20869, 7], [20869, 21563, 8], [21563, 22288, 9], [22288, 27943, 10], [27943, 32964, 11], [32964, 38426, 12], [38426, 44564, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 44564, 0.12295]]}
olmocr_science_pdfs
2024-11-27
2024-11-27
9ed876d231c5fb13e6ce3ae7f7ce874e98080bfb
Progressive Indexes
Timbó Holanda, P.T.
Version: Publisher's Version
License: Licence agreement concerning inclusion of doctoral thesis in the Institutional Repository of the University of Leiden
Downloaded from: https://hdl.handle.net/1887/3212937
Note: To cite this publication please use the final published version (if applicable).

1 Introduction

The major drawback of Progressive Indexes is that they are only designed for static databases. However, in the interactive data analysis scenario, the data is not static but rather frequently updated with batches of data that must be appended. If we take the flight dataset example presented in Chapter 2, we can consider the scenario where batches of data are regularly appended, since new flights happen all the time (e.g., data is appended every few minutes, hours, or days, depending on how critical it is to analyze recent data). One way of adapting the current Progressive Indexing strategy to support updates is to use the techniques developed for merging updates in Adaptive Indexes, since they produce similar intermediate incremental indexes. However, these merging techniques follow Adaptive Indexing's philosophy of lazy query execution, drastically decreasing robustness (i.e., they create performance spikes that vary the per-query response time by orders of magnitude up and down), with no guaranteed convergence and high penalties for larger batches of appends. In this chapter, we introduce Progressive Mergesort. Progressive Mergesort is designed to efficiently merge batches of appends while following Progressive Indexing's core design decisions. It presents a low query impact even for large batches, high robustness, and guaranteed convergence (i.e., all elements are merged into one array).

1.1 Contributions

The main contributions of this chapter are:

- We introduce a novel Progressive Indexing technique that focuses on merging batches of appends into our main Progressive Indexing run.
- We experimentally verify that Progressive Mergesort provides more robust, predictable, and faster performance across various batch sizes and update frequencies.
- We provide open-source implementations of Progressive Mergesort.

1.2 Outline

This chapter is organized as follows. In Section 2, we investigate related research on updating Adaptive Indexes, called Adaptive Merges. In Section 3, we describe our novel Progressive Mergesort technique and discuss its benefits and drawbacks. In Section 4, we perform an experimental evaluation of the novel method we introduce and compare it against adaptive merging techniques. Finally, in Section 5 we draw our conclusions.

2 Related Work

There are three main algorithms designed to efficiently merge appends into adaptive indexes [34]: Merge Complete, Merge Gradual, and Merge Ripple. We will refer to these algorithms as Adaptive Merges from now on. They follow the same philosophy as Adaptive Indexing by only merging appends when necessary. They differ from each other in terms of what data they merge and how they merge it. In the following subsections, we overview each algorithm and present an example of its execution. Besides the strategies to efficiently merge appends into the index's column, Holanda et al. [29] present a strategy to prune cold data from the cracker index to boost updates. However, we do not explore this strategy in this work since it directly goes against our full convergence philosophy.
2.1 Merge Complete (MC)

This algorithm completely merges the full *Appends* vector into the *Cracker Column* as soon as a query requests data that is also present in the *Appends* vector. Figure 5-1 depicts an example of Merge Complete executing the query $A < 8$. In our example, the column is already partitioned around three pivot points 8, 10, and 14. Since the appends vector contains element 6 (i.e., an element that qualifies for the query), the whole appends vector is merged. The first step of the merge is to resize our cracker column to `cracker_column.size() + appends.size()`, followed by a copy of the appends elements to the end of the column and the deletion of the appends vector. Then we must swap the newly added elements that are in the wrong piece to their correct piece. In this case, elements 6, 8, and 11 are swapped with elements at the current piece's border with the last piece. After performing the swaps, we update the cracker index pointer for 14 to point at the correct place, considering the newly inserted elements. This process is repeated until all inserted elements are placed in the correct pieces. In our example, we perform 6 swaps and we update all 3 nodes of our cracker index. At the end of the execution, the appends list is empty.

¹ Our implementations and benchmarks are available at [https://github.com/pdet/ProgressiveMergesort](https://github.com/pdet/ProgressiveMergesort).

2.2 Merge Gradual (MG)

Merge Gradual differs from Merge Complete concerning the amount of data merged per query. It only merges elements that qualify for the currently executing query. Figure 5-2 presents the algorithm executing the $A < 8$ query on the same cracker column as before. A binary search using the query predicates is performed on the Appends vector. The elements that qualify for the query, in this case only the value 6, are merged into the cracker column. As before, value 6 is initially placed at the end of the cracker column and erased from the appends vector. Value 6 is then swapped until it reaches its correct piece, with the nodes in the cracker index being updated accordingly. Note that 3 swaps are done in this case, all 3 nodes of the cracker index are updated, and 25% of the values in the appends vector are merged.

2.3 Merge Ripple (MR)

Like Merge Gradual, the Merge Ripple algorithm only merges the elements that qualify for the query predicates. They differ in how they merge them. In Merge Ripple, instead of resizing the Cracker Column and appending the element to its end as the first step, it starts by swapping the to-be-inserted element with the first element of the next greater-neighboring piece after its correct piece. Figure 5-3 depicts an example of Merge Ripple executing the query $A < 8$. In our example, the column is already partitioned around three pivot points (8, 10, 14), and the appends array contains four values (6, 8, 11, 17). Since we only need to insert element 6 from the appends array, we perform a cracker index lookup and identify the element's piece (i.e., the first piece, holding 6, 4, 2, and 7). We then go to the successor piece (i.e., piece 2, with elements 8 and 9) and swap the first element of that piece (8) with the element in our appends (6). After that, we only need to update the cracker index node that points to the value 8. In this case, we only had to perform 1 swap and update 1 node in the cracker index. However, our appends list remains the same size it had at the start of the algorithm. Merge Ripple performs fewer swaps and updates than the previous algorithms while merging the necessary amount of data into our index.
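To make the per-query selection concrete, the following is a small illustrative sketch (ours, not taken from the thesis or its repository): assuming the appends vector is kept sorted, the elements that qualify for a range predicate can be located with two binary searches, which is exactly what Merge Gradual and Merge Ripple need before they start swapping.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Returns the half-open index range [first, last) of elements in the sorted
// appends vector that satisfy low <= value < high, i.e., the elements that
// Merge Gradual / Merge Ripple would have to merge for this query.
std::pair<size_t, size_t> qualifying_range(const std::vector<int64_t>& appends,
                                           int64_t low, int64_t high) {
    auto first = std::lower_bound(appends.begin(), appends.end(), low);
    auto last  = std::lower_bound(appends.begin(), appends.end(), high);
    return {static_cast<size_t>(first - appends.begin()),
            static_cast<size_t>(last - appends.begin())};
}
```

Keeping the appends vector sorted is precisely the hidden cost highlighted in the discussion below: for large batches, this a-priori sort becomes a bottleneck.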
**Discussion.** The Merge Complete algorithm presents the highest convergence since it fully merges the appends list whenever the appends vector has elements that qualify for the query. However, it will potentially present high performance spikes when performing such merges. The Merge Ripple is expected to present lower performance spikes since it only merges what is necessary, avoiding column resizes, swaps, and index node updates. However, it also presents slow convergence and can present large performance spikes when the workload shifts to a piece where many elements must be merged. The Merge Gradual seems to be the best balance between robustness and convergence, but robustness issues similar to those of Merge Ripple are still expected. Another major problem of these algorithms is the necessity of having a fully sorted appends list to merge the data efficiently. In the original paper, only small batches were used in the experiments. However, when facing large appends, the necessary a-priori sort of the appends list presents a major performance bottleneck.

3 Progressive Mergesort

Progressive Mergesort is a Progressive Indexing technique inspired by the mergesort algorithm [17] and used for merging appends into the main Progressive Indexing structure. It follows the three pillars of progressive indexes: (1) low impact on query execution, (2) robust performance, and (3) guaranteed convergence. It relies on an indexing budget $\delta$ that represents the percentage of the data indexed per query, guaranteeing that the same amount of effort is distributed over the entire workload. In practice, during query execution, the $\delta$ defined for our Progressive Indexing algorithm is used for both the main index structure and Progressive Mergesort. Progressive Mergesort follows two distinct canonical phases, the refinement phase and the merge phase, described in this section.

**Refinement.** In the refinement phase, we can use any of the other proposed Progressive Indexing algorithms, getting the most performance depending on data distribution and workload. Our budget is used as described in Chapter 3, depending on the algorithm executing the refinement. In this work, we decided to experiment with Progressive Quicksort as our algorithm of choice. Utilizing the other algorithms is left as an engineering exercise for future work.

**Merge.** At the end of the refinement phase of any Progressive Indexing algorithm, the result is a sorted list. When all merge chunks are fully sorted, we progressively merge them into one sorted chunk, using a progressive two-way merge. Figure 5-4 depicts a high-level concept of Progressive Mergesort. In this figure, red vectors are completely unsorted, yellow vectors are partially sorted, and green vectors are completely sorted. We start with our main index structure only partially sorted and with a new batch of appends. Execution starts with the refinement phase. At this step, any Progressive Indexing technique can be used and will continue its execution until the lists are completely sorted. When all chunks are entirely sorted, the second phase of Progressive Mergesort starts. Here, the append arrays are progressively merged into one array. One might note that new batches can be introduced while other batches are already being refined. In this case, a Progressive Mergesort run will be initiated for the newly appended chunks. All these chunks use the same $\delta$ as our main progressive index, but normalized to the chunk size.
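The following is a minimal sketch (ours, not the thesis implementation available in the repository linked above) of one budgeted step of such a progressive two-way merge, assuming the chunks are plain `std::vector<int64_t>` columns and the per-query budget is derived from $\delta$.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// One budgeted step of a progressive two-way merge.
// 'left' and 'right' are fully sorted chunks; 'out' is the (pre-reserved) merge column.
// 'l_off' and 'r_off' remember where the previous step stopped, so the merge can be
// resumed on the next query. 'budget' is the number of elements we may merge this query.
// Returns true once the merge is complete.
bool progressive_merge_step(const std::vector<int64_t>& left, const std::vector<int64_t>& right,
                            std::vector<int64_t>& out, size_t& l_off, size_t& r_off,
                            size_t budget) {
    while (budget > 0 && (l_off < left.size() || r_off < right.size())) {
        if (r_off >= right.size() || (l_off < left.size() && left[l_off] <= right[r_off])) {
            out.push_back(left[l_off++]);   // take from the left chunk
        } else {
            out.push_back(right[r_off++]);  // take from the right chunk
        }
        --budget;
    }
    return l_off == left.size() && r_off == right.size();
}
```

On each query, `budget` would be set to roughly $\delta$ times the total merge size, and `out` would be reserved up front, which is consistent with why no resize spike appears for Progressive Mergesort in the experiments of Section 4.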
Only when the original Progressive Indexing column and the appends are fully sorted (i.e., we have one sorted column for the Progressive Indexing and one sorted column for all the appends) and the appends have a size equal to or bigger than the Progressive Indexing column do we merge them.

Figure 5-5 depicts an example of Progressive Mergesort with $\delta = 0.5$. We start with two batches of updates. In the initial iterations, we execute Progressive Quicksort as the refinement phase. In Refine (1), a Progressive Quicksort iteration is initiated for each chunk; since $\delta = 0.5$, both iterations index half of each chunk around one pivot. In Refine (3), both Progressive Quicksort iterations have ended and both chunks are fully sorted, hence we will start the merge phase of Progressive Mergesort in the following query. In Merge (1), we start to merge both lists using a two-way merge algorithm, and we stop when the resulting list is half complete due to our delta. For the chunks that are being merged, we must store the offsets where we stopped merging. Finally, in Merge (2), we end the merge phase with one completely sorted append list and delete the previous chunks.

**Query Processing.** When executing a query on a column with Progressive Indexing, we might encounter several arrays (i.e., the original Progressive Indexing column and batches of appends that started to be refined but are not yet merged) with different levels of refinement. During query execution, each array must be checked to return the elements that fit the query predicates. If the array is already fully sorted, a binary search is executed to return the result. Otherwise, the array is at some step of the refinement phase, hence a lookup in the binary tree is necessary to return the offsets that match the query predicates.

**When to Merge.** In this work, we decided to first completely merge all appends into one fully sorted append array. If this array has a size equal to or bigger than the current Progressive Indexing column, we merge both. This decision was made to avoid frequent resizes of large arrays (e.g., if we merged the Progressive Indexing column with every append first, this would result in a resize of the progressive column at every batch, which would be prohibitively expensive). However, this decision is not necessarily optimal for all workloads. Having multiple arrays increases the random access needed to answer the workload while diminishing the merge costs, creating a trade-off depending on when and how these merges are performed. Creating an algorithm that decides when is the appropriate moment to merge these different arrays, and which arrays should be merged, is out of this chapter's scope, and we leave it as future work.

Listing 5 depicts a C++-like implementation of Progressive Mergesort. Progressive Mergesort has as its input a vector of columns representing the chunks that are being refined, a Column representing the current set of updates, a double with the delta, the query predicates, the result structure, a pointer to the merge column, and a parameter indicating the minimum size the update column must have before entering the refinement phase. In the first for loop (lines 5-11), we iterate through all chunks and execute the query on each chunk. On line 6, we normalize our delta to the size of the chunk.
Line 7 executes a Progressive Quicksort call that refines and returns the filtered elements of that chunk. These elements are then merged into our result structure. While checking each chunk, we also check whether they are all sorted, since we only start the merge phase after all chunks are already sorted. In the second for loop (lines 12-15), we check if any of the elements in our current update column qualify for the range query. If so, we add them to the result structure. In the first if (lines 16-20), we initiate a merge of the two last chunks in our vector if no merge is currently happening and all chunks are sorted. The second if (lines 21-29) performs the actual merge: we calculate a normalized budget for the size of the merge_column and progressively build it. Lines 33-37 check if the merge is already finished. If it is done, we delete the merged chunks from our chunk vector and add the newly merged chunk to the vector. We also set the pointer to the merge_column to null to indicate that we can initiate other merges. The final if (lines 30-34) checks if the updates column has reached a size bigger than the minimum necessary for it to become a chunk. If so, we initiate a Progressive Quicksort refinement that will continue to be refined in the following queries. We add it to our chunk vector and create a new update column to hold the next appends.

4 Experimental Analysis

This section provides an experimental evaluation of Progressive Mergesort and compares it with the Adaptive Merge techniques.

4.1 Setup

We implemented the Progressive Mergesort algorithm and the Adaptive Merges in a stand-alone program written in C++. Progressive Mergesort uses Progressive Quicksort in its refinement phase.

**Compilation.** This application was compiled with GNU g++ version 7.2.1 using optimization level -O3.

**Machine.** All experiments were conducted on a machine equipped with 256 GB main memory and an 8-core Intel Xeon E5-2650 v2 CPU @ 2.6 GHz with 20480 KB L3 cache.

**Appends.** All experiments have three parameters regarding the appends: (1) the *batch_size*, which represents the size of a batch of appends; (2) the *frequency*, which represents the interval of queries after which a new batch of appends is executed; and (3) *start_after*, which describes how many queries need to be executed before the first append happens. With these three parameters we calculate the number of appended elements as \( total\_appends = \frac{total\_queries - start\_after}{frequency} \times batch\_size \), and divide our data set into the *original_column* set, which represents our initially loaded column, and the *appends* set, which represents the appends that will be inserted.

**Data set.** We generate a synthetic data set composed of \( N + total\_appends \) unique 8-byte integers, with \( N \in \{10^7, 10^8, 10^9\} \) representing the original column size. After generating the data set, we shuffle it following a uniform random distribution and divide it into our original column and a list of appends.

**Workload.** Unless stated otherwise, all experiments consist of a synthetic workload with \( 10^4 \) queries of the form \[ \text{SELECT SUM(R.A) FROM R WHERE R.A BETWEEN } V_1 \text{ AND } V_2. \] A random value is selected for \( V_1 \), and \( V_2 = V_1 + (N + total\_appends) \times 1\% \).

**Configuration.** We experiment with three main configurations:

- **High Frequency Low Volume (HFLV):** A batch of appends with \( batch\_size = 0.001\% \times N \) executed every 10 queries.
- **Medium Frequency Medium Volume (MFMV):** A batch of appends with \( batch\_size = 0.01\% \times N \) executed every 100 queries.
- **Low Frequency High Volume (LFHV):** A batch of appends with \( batch\_size = 0.1\% \times N \) executed every 1000 queries.
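To make the effect of these parameters concrete, the following small self-contained sketch (ours, not from the thesis or its repository) plugs the three configurations into the total_appends formula above for the smallest data size.

```cpp
#include <cstdint>
#include <cstdio>

// Illustrative sketch: deriving the append parameters of the three configurations
// for a given original column size N, query count, and start_after.
struct AppendConfig {
    const char* name;
    double batch_fraction;   // batch_size as a fraction of N
    int64_t frequency;       // a batch is appended every `frequency` queries
};

int main() {
    const int64_t N = 10000000;          // 10^7, the smallest data size
    const int64_t total_queries = 10000; // 10^4 queries
    const int64_t start_after = 1000;    // appends begin after 1000 queries (Section 4.2)

    const AppendConfig configs[] = {
        {"HFLV", 0.00001, 10},    // 0.001% of N every 10 queries
        {"MFMV", 0.0001, 100},    // 0.01%  of N every 100 queries
        {"LFHV", 0.001, 1000},    // 0.1%   of N every 1000 queries
    };

    for (const auto& c : configs) {
        const int64_t batch_size = static_cast<int64_t>(c.batch_fraction * N);
        const int64_t total_appends = (total_queries - start_after) / c.frequency * batch_size;
        std::printf("%s: batch_size=%lld, total_appends=%lld\n", c.name,
                    static_cast<long long>(batch_size), static_cast<long long>(total_appends));
    }
    return 0;
}
```

For $N = 10^7$ this yields batches of 100, 1,000, and 10,000 elements respectively, and 90,000 appended elements in total for every configuration, so the three setups differ in how the same total volume is spread over the workload.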
4.2 Performance Comparison

In this work, we decided to use the Adaptive Merge algorithms only with Adaptive Indexing, due to the increased complexity of implementing them to work with Progressive Indexing, and leave this task as an engineering exercise for future work. Since the base indexing algorithm is different for the Adaptive Merges and Progressive Mergesort, we decided to start appending data after 1000 queries, so as to have refined indexes and better isolate the actual append cost from early index creation. Hence we avoid the noise of partitioning the original_column and focus on the actual merges from the appends. Our Progressive Mergesort uses a fixed $\delta$ of 0.1 in all experiments.

Figure 5-6: Progressive Mergesort and Adaptive Merges ($N = 10^7$ and start_after = 1000)

Figure 5-6 depicts a per-query performance comparison of Progressive Mergesort and the Adaptive Merges. This experiment uses a data set with $N = 10^7$ and runs all three configurations described in the previous section. We continue this section by describing two observations present in all experiments: (1) the column resizes, and (2) an overall query robustness analysis.

**Resizes.** In all three configurations, HFLV, MFMV, and LFHV, we can notice that all three Adaptive Merges present a performance spike right after the start of the updates, around query 1000. The main reason for this spike is the need to resize the *Cracker Column* when appending new data. Since this resize reserves two times the space of the original *Cracker Column*, it only happens once. It is also possible to notice that with Merge Ripple the spike occurs 100 queries later than with Merge Complete and Merge Gradual. This is because Merge Ripple avoids resizing the *Cracker Column* by swapping the data between the *Appends* and the column, with the actual resize only happening when we are in the last piece. This problem does not exist with Progressive Mergesort, since we perform a `vector.reserve()` to allocate memory for the merge vector, and filling the merge vector is completed over multiple queries.

**Robustness.** Merge Complete presents the lowest robustness of all algorithms. Whenever a merge happens, it has a big upward spike since it merges the whole appends vector at once. Merge Gradual is the second worst. Since it completely merges all elements that qualify for the predicate, it does not have one big performance spike, spreading those merges over many queries. This is particularly visible in Figure 5-6c, which depicts the low-frequency high-volume experiment (i.e., at every 1000 queries, a batch of size $10^4$ is inserted). One can see that at every 1000 queries there is an upward spike that slowly decreases for 500 queries and then slopes down, since most of the *Appends* array has been merged by that point. Of the Adaptive Merges, Merge Ripple presents the least variance. All queries slightly increase their cost with increasing updates. Finally, Progressive Mergesort presents the lowest variance, with no upward performance spikes. One can notice that all algorithms present downward spikes at the same queries over all three configurations.
These are caused by noise due to the way we select our query predicates to fix our workload selectivity. Since we create our second query predicate as $V_2 = V_1 + (N + total\_appends) \times 1\%$, queries might not have exactly 1% selectivity if the data is not yet completely merged into the column. Since the figures have the y-axis in log scale, small differences in selectivity produce these downward performance spikes.

4.3 Varying Data Sizes

Table 5.1 depicts the total execution cost for the workload, excluding the initial 1000 queries. In all experiments, Progressive Mergesort presented approximately 2x better performance than the best performing Adaptive Merge algorithm. The main reason for this performance difference is that all Adaptive Merge algorithms must keep the appends sorted to merge them efficiently. This problem impacts Merge Ripple the most, since it tends to keep a larger appends array due to its lazier merging property. That means that a larger array must be re-sorted at every append insertion.

Table 5.1: Cumulative Time (s)

<table> <thead> <tr> <th>N</th> <th>Workload</th> <th>MC</th> <th>MG</th> <th>MR</th> <th>PM</th> </tr> </thead> <tbody> <tr> <td rowspan="3">$10^7$</td> <td>HFLV</td> <td>2.72</td> <td>3.52</td> <td>2.57</td> <td>1.07</td> </tr> <tr> <td>MFMV</td> <td>2.18</td> <td>3.39</td> <td>2.45</td> <td>1.07</td> </tr> <tr> <td>LFHV</td> <td>2.00</td> <td>2.55</td> <td>2.34</td> <td>1.06</td> </tr> <tr> <td rowspan="3">$10^8$</td> <td>HFLV</td> <td>22.76</td> <td>26.16</td> <td>26.61</td> <td>10.64</td> </tr> <tr> <td>MFMV</td> <td>20.25</td> <td>26.14</td> <td>25.19</td> <td>10.72</td> </tr> <tr> <td>LFHV</td> <td>22.14</td> <td>22.42</td> <td>23.89</td> <td>10.63</td> </tr> <tr> <td rowspan="3">$10^9$</td> <td>HFLV</td> <td>209.25</td> <td>221.67</td> <td>295.39</td> <td>104.77</td> </tr> <tr> <td>MFMV</td> <td>206.39</td> <td>219.39</td> <td>267.94</td> <td>104.96</td> </tr> <tr> <td>LFHV</td> <td>197.89</td> <td>200.62</td> <td>250.62</td> <td>103.95</td> </tr> </tbody> </table>

One might notice that the results for the Adaptive Merges seem to directly contradict Idreos et al. [34], where Merge Ripple was the best performing algorithm of the three. The HFLV configuration with $N = 10^7$ is the only experiment with the same parameters as the original paper, and it showcases a similar result, with Merge Ripple being the fastest of the Adaptive Merges. However, as discussed before, with larger appends Merge Ripple starts to lose its benefit of fewer swaps because of the cost of keeping the append vector sorted. One other interesting result is the variance in the total cost depending on the configuration of the workload. The Adaptive Merge algorithms present a much higher variance than Progressive Mergesort for the same data size. This is more prominent with larger data sizes. Taking $N = 10^9$ as an example, Merge Complete presents a variance of 11.36s, Merge Gradual of 21.05s, Merge Ripple of 44.72s, and Progressive Mergesort of 1.01s. Compared to the Adaptive Merge algorithms, Progressive Mergesort has a very low variance across configurations at the same data size. This is because Progressive Mergesort does not perform a complete sort of the append list but rather progressively refines and merges it depending on the data size. Table 5.2 depicts the order of magnitude of each workload's query variance for all three data sizes. We only calculate the query variance after executing the first 1000 queries. Note that the lower the variance, the more robust the algorithm is.
As expected, Merge Complete presents the lowest robustness since it completely merges the Appends array into the Cracker Column, causing a huge performance spike. Merge Gradual and Merge Ripple are better than Merge Complete since they only merge tuples that qualify for the query predicates. Progressive Mergesort presents the highest robustness due to its indexing budget, effectively offering more fine-grained control over the stream of queries.

4.4 Appends during Index Creation

To perform a fair comparison of the Adaptive Merges and Progressive Mergesort, we only initiated the updates after 1000 queries, to minimize the initial index creation cost of Adaptive Indexing and Progressive Indexing. However, after 1000 queries, the Progressive Index is already fully converged (i.e., the main index is a sorted list). In this experiment, we want to evaluate Progressive Mergesort's impact during Progressive Indexing's creation phase (i.e., the Initialization and Refinement phases). In our setup, we use a dataset with $N = 10^7$, a workload with 1% selectivity and 200 queries, and three different update setups. All update setups start at the first query and perform appends every ten queries. They differ in the batches' size, with batches of size 100, 1000, and 10000. Figure 5-7 depicts the per-query cost for the 200 queries. The height of the performance spikes is strongly correlated with the batch sizes, with larger batches introducing higher spikes. This happens because our strategy uses a fixed delta (i.e., a percentage of the total data size that is indexed per query) for the entire workload. Hence, the more data we ingest, the higher the actual per-query cost, since the data size increases. One way of minimizing this issue is to extend the cost models proposed in Chapter 3 to automatically generate a value for $\delta$ that reduces query variance. We leave that algorithm as an exercise for future work.

5 Summary

This chapter introduces *Progressive Mergesort*, a novel progressive algorithm used to merge batches of appends. We compare it to the state-of-the-art merging algorithms from adaptive indexing techniques and show how they perform under multiple synthetic benchmarks. Our solution is more robust and faster than the state of the art.

Figure 5-7: Progressive Mergesort before index convergence.
{"Source-Url": "https://scholarlypublications.universiteitleiden.nl/access/item%3A3212945/view", "len_cl100k_base": 6069, "olmocr-version": "0.1.50", "pdf-total-pages": 17, "total-fallback-pages": 0, "total-input-tokens": 32996, "total-output-tokens": 6778, "length": "2e12", "weborganizer": {"__label__adult": 0.000308990478515625, "__label__art_design": 0.0003731250762939453, "__label__crime_law": 0.00039458274841308594, "__label__education_jobs": 0.0011892318725585938, "__label__entertainment": 9.40561294555664e-05, "__label__fashion_beauty": 0.00016891956329345703, "__label__finance_business": 0.00042724609375, "__label__food_dining": 0.0003294944763183594, "__label__games": 0.0005445480346679688, "__label__hardware": 0.0006427764892578125, "__label__health": 0.00047969818115234375, "__label__history": 0.0003063678741455078, "__label__home_hobbies": 9.387731552124023e-05, "__label__industrial": 0.0004467964172363281, "__label__literature": 0.00034308433532714844, "__label__politics": 0.0003085136413574219, "__label__religion": 0.00042724609375, "__label__science_tech": 0.059661865234375, "__label__social_life": 0.00012177228927612303, "__label__software": 0.017425537109375, "__label__software_dev": 0.9150390625, "__label__sports_fitness": 0.00023055076599121096, "__label__transportation": 0.0003628730773925781, "__label__travel": 0.0002046823501586914}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26633, 0.05652]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26633, 0.25401]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26633, 0.90551]], "google_gemma-3-12b-it_contains_pii": [[0, 489, false], [489, 1919, null], [1919, 3631, null], [3631, 5013, null], [5013, 6204, null], [6204, 7729, null], [7729, 10347, null], [10347, 11597, null], [11597, 12866, null], [12866, 15599, null], [15599, 15752, null], [15752, 18013, null], [18013, 19397, null], [19397, 22114, null], [22114, 24701, null], [24701, 26224, null], [26224, 26633, null]], "google_gemma-3-12b-it_is_public_document": [[0, 489, true], [489, 1919, null], [1919, 3631, null], [3631, 5013, null], [5013, 6204, null], [6204, 7729, null], [7729, 10347, null], [10347, 11597, null], [11597, 12866, null], [12866, 15599, null], [15599, 15752, null], [15752, 18013, null], [18013, 19397, null], [19397, 22114, null], [22114, 24701, null], [24701, 26224, null], [26224, 26633, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26633, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26633, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26633, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26633, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26633, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26633, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26633, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26633, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26633, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26633, null]], "pdf_page_numbers": [[0, 489, 1], [489, 1919, 2], [1919, 3631, 3], [3631, 5013, 4], [5013, 6204, 5], [6204, 7729, 6], [7729, 10347, 7], [10347, 11597, 8], 
[11597, 12866, 9], [12866, 15599, 10], [15599, 15752, 11], [15752, 18013, 12], [18013, 19397, 13], [19397, 22114, 14], [22114, 24701, 15], [24701, 26224, 16], [26224, 26633, 17]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26633, 0.10377]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
6bdc18e212f5cb1eba97a2ddc599e3fe139412d4
### 1 Introduction

In the last note, we introduced the concept of isolation as one of the ACID properties. Let's revisit our definition here:

- **Isolation**: Execution of each Xact is isolated from that of others. In reality, the DBMS will interleave the actions of many Xacts rather than execute them one after the other. The DBMS will ensure that each Xact executes as if it ran by itself.

This note will go into detail on how the DBMS is able to interleave the actions of many transactions while guaranteeing isolation.

### 2 Two Phase Locking

What are locks, and why are they useful? Locks are basically what allows a transaction to read and write data. For example, if transaction $T_1$ is reading data from resource A, then it needs to make sure no other transaction is modifying resource A at the same time. So a transaction that wants to read data will ask for a Shared (S) lock on the appropriate resource, and a transaction that wants to write data will ask for an Exclusive (X) lock on the appropriate resource. Only one transaction may hold an exclusive lock on a resource, but many transactions can hold a shared lock on the same data.

**Two phase locking (2PL)** is a scheme that ensures the database uses conflict serializable schedules. The two rules for 2PL are:

- Transactions must acquire an S (shared) lock before reading, and an X (exclusive) lock before writing.
- Transactions cannot acquire new locks after releasing any locks – this is the key to enforcing serializability through locking!

The problem with this is that it does not prevent **cascading aborts**. For example:

- $T_1$ updates resource A and then releases the lock on A.
- $T_2$ reads from A.
- $T_1$ aborts.
- In this case, $T_2$ must also abort because it read an uncommitted value of A.

To solve this, we will use **Strict Two Phase Locking**. Strict 2PL is the same as 2PL, except all locks get released together when the transaction completes.

### 3 Lock Management

Now we know what locks are used for and the types of locks. We will take a look at how the Lock Manager\(^1\) manages these lock and unlock (or acquire and release) requests and how it decides when to grant a lock.

The LM maintains a hash table, keyed on the names of the resources being locked. Each entry contains a granted set (the set of granted locks / the transactions holding the locks for each resource), the lock type (S or X, or types we haven't yet introduced), and a wait queue (the queue of lock requests that cannot yet be satisfied because they conflict with the locks that have already been granted). See the following graphic (figure omitted).

When a lock request arrives, the Lock Manager checks if any Xact in the Granted Set or in the Wait Queue wants a conflicting lock. If so, the requester is put into the Wait Queue. If not, the requester is granted the lock and put into the Granted Set. In addition, Xacts can request a lock upgrade: this is when a Xact holding a shared lock requests an upgrade to an exclusive lock. The Lock Manager will add this upgrade request at the front of the queue.

\(^1\) We will refer to the Lock Manager as LM sometimes.

Here is some pseudocode for how to process the queue; note that it doesn't explicitly cover what to do in cases like promotion, but it's a good overview nevertheless.
```python
# If queue skipping is not allowed, here is how to process the queue
H = set of held locks on A
Q = queue of lock requests for A

def request(lock_request):
    if Q is empty and lock_request is compatible with all locks in H:
        grant(lock_request)
    else:
        addToQueue(lock_request)

def release_procedure(lock_to_release):
    release(lock_to_release)
    for lock_request in Q:  # iterate through the lock requests in order
        if lock_request is compatible with all locks in H:
            grant(lock_request)  # grant the lock, updating the held set
        else:
            return
```

Note that this implementation does not allow queue skipping. When a request arrives under a queue-skipping implementation, we first check whether the lock can be granted based on what locks are held on the resource; if the lock cannot be granted, it is put at the back of the queue. When a lock is released and the queue is processed, we grant any locks that are compatible with what is currently held. For an example of queue skipping and pseudocode, see the appendix. It relies on you understanding multigranularity locking, however, so make sure to read section 5 first to understand the example.

### 4 Deadlock

We now have a lock manager that will put requesters into the Wait Queue if there are conflicting locks. But what happens if $T_1$ and $T_2$ both hold S locks on a resource and they both try to upgrade to X? $T_1$ will wait for $T_2$ to release its S lock so that it can get an X lock, while $T_2$ will wait for $T_1$ to release its S lock so that it can get an X lock. At this point, neither transaction will be able to get the X lock because they're waiting on each other! This is called a **deadlock**: a cycle of Xacts waiting for locks to be released by each other.

### 4.1 Avoidance

One way we can get around deadlocks is by trying to avoid getting into a deadlock in the first place. We will assign each Xact a **priority** based on its age: now − start time. If $T_i$ wants a lock that $T_j$ holds, we have two options:

- **Wait-Die**: If $T_i$ has higher priority, $T_i$ waits for $T_j$; else $T_i$ aborts.
- **Wound-Wait**: If $T_i$ has higher priority, $T_j$ aborts; else $T_i$ waits.

Important detail: if a transaction re-starts, make sure it gets its original timestamp.

### 4.2 Detection

Although we avoid deadlocks with the method above, we end up aborting many transactions! We can instead try detecting deadlocks, and if we find one, we abort one of the transactions in the deadlock so the other transactions can continue. We will detect deadlocks by creating and maintaining a "waits-for" graph. This graph will have one node per Xact and an edge from $T_i$ to $T_j$ if:

- $T_j$ holds a lock on resource X;
- $T_i$ tries to acquire a lock on resource X, but $T_j$ must release its lock on resource X before $T_i$ can acquire its desired lock.

For example, the following schedule produces an edge from $T_1$ to $T_2$, because after $T_2$ acquires a lock on B, $T_1$ tries to acquire a conflicting lock on it. Thus, $T_1$ waits for $T_2$.

Example:

|     | 1    | 2    | 3    | 4    |
|-----|------|------|------|------|
| T1: | S(A) | S(D) |      | S(B) |
| T2: |      |      | X(B) |      |
| T3: |      |      |      |      |
| T4: |      |      |      |      |

If a transaction $T_i$ is waiting on another transaction $T_j$ (i.e., there is an edge from $T_i$ to $T_j$), then $T_i$ cannot acquire any new locks. Therefore, a transaction $T_k$ will not wait for $T_i$ on a resource X unless $T_i$ had acquired a conflicting lock on X before it began waiting for $T_j$.
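Before the worked example that follows, here is a minimal, self-contained sketch of how such a waits-for graph might be maintained and checked for cycles. This is illustrative only and not part of the original note; the names (`WaitsForGraph`, `add_edge`, `has_cycle`) are our own.

```python
from collections import defaultdict

class WaitsForGraph:
    """Toy waits-for graph: nodes are transaction ids, an edge (i, j) means T_i waits for T_j."""

    def __init__(self):
        self.edges = defaultdict(set)

    def add_edge(self, waiter, holder):
        # T_waiter is blocked on a resource because T_holder holds an incompatible lock.
        self.edges[waiter].add(holder)

    def remove_transaction(self, txn):
        # Called when a transaction commits or aborts: drop its node and all edges to it.
        self.edges.pop(txn, None)
        for deps in self.edges.values():
            deps.discard(txn)

    def has_cycle(self):
        # Standard DFS with colors: 0 = unvisited, 1 = on the current path, 2 = done.
        color = defaultdict(int)

        def dfs(node):
            color[node] = 1
            for nxt in self.edges[node]:
                if color[nxt] == 1:            # back edge -> cycle -> deadlock
                    return True
                if color[nxt] == 0 and dfs(nxt):
                    return True
            color[node] = 2
            return False

        return any(color[n] == 0 and dfs(n) for n in list(self.edges))

# Tiny usage example: T1 waits for T2 and T2 waits for T1 -> deadlock.
g = WaitsForGraph()
g.add_edge("T1", "T2")
g.add_edge("T2", "T1")
assert g.has_cycle()
g.remove_transaction("T2")   # "shoot" one transaction in the cycle
assert not g.has_cycle()
```

A real lock manager would add an edge whenever a request is enqueued behind an incompatible holder and would run the cycle check periodically, as described above.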
Consider the example below, while keeping in mind that only lock acquisitions are shown in the schedule, not lock releases.

Example:

|     | 1    | 2    | 3    | 4    | 5    |
|-----|------|------|------|------|------|
| T1: | X(A) |      |      |      |      |
| T2: |      | S(A) | X(B) |      |      |
| T3: |      |      |      | S(B) | X(A) |

There is an edge from T2 to T1 because T1 holds an X lock on resource A when T2 requests a conflicting S lock on it. Once T2 waits for T1 to finish with resource A, none of T2's operations can proceed until it is removed from the wait queue. This is why T3 does not wait for T2 when acquiring an S lock on B: T2 was never actually able to acquire an X lock on B, as it was still waiting on T1. Similarly, when T3 goes to acquire an X lock on A, it need only wait for T1, since at that point in time the only transaction with a conflicting lock on A is T1. Note that at that point both T2 and T3 will be in the wait queue for resource A.

We will periodically check for cycles in the graph, which indicate a deadlock. If a cycle is found, we will "shoot" an Xact in the cycle and abort it to break the cycle.

Important note: a "waits-for" graph is used for cycle detection and is different from the conflict dependency graph we discussed earlier (in the previous note), which was used to figure out whether a transaction schedule was serializable.

### 5 Lock Granularity

So now that we understand the concept of locking, we want to figure out what to actually lock. Do we want to lock the tuple containing the data we wish to write? Or the page? Or the table? Or maybe even the entire database, so that no transaction can write to this database while we're working on it? As you can guess, the decision we make will differ greatly based upon the situation we find ourselves in.

Let us think of the database system as a tree: the top level is the database. The next level is the table, which is followed by the pages of the table. Finally, the records of the table themselves are the lowest level in the tree.

Remember that when we place a lock on a node, we implicitly lock all of its children as well (intuitively, think of it like this: if you place a lock on a page, then you're implicitly placing a lock on all of its records and preventing anyone else from modifying them). So you can see how we'd like to be able to specify to the database system exactly which level we'd really like to place the lock on. That's why multigranularity locking is important; it allows us to place locks at different levels of the tree. We will have the following new lock modes:

- **IS**: Intent to get S lock(s) at finer granularity.
- **IX**: Intent to get X lock(s) at finer granularity. Note that two transactions can place an IX lock on the same resource – they do not directly conflict at that point because they could place their X locks on two different children! So we leave it up to the database manager to ensure that they don't place X locks on the same node later on, while allowing two IX locks on the same resource.
- **SIX**: Like S and IX at the same time. This is useful if we want to prevent any other transaction from modifying a lower resource while still allowing them to read lower levels. Here, we say that at this level, we claim a shared lock; now, no other transaction can claim an exclusive lock on anything in this sub-tree (however, it can possibly claim a shared lock on something that is not being modified by this transaction – i.e., something we won't place an X lock on. That's left for the database system to handle).
Interestingly, note that no other transaction can claim an S lock on a node that has a SIX lock on it, because that would place a shared lock on the entire sub-tree for a second transaction, and that would prevent us from modifying anything in this sub-tree. The only lock compatible with SIX is IS.

Here is the compatibility matrix; interpret the axes as transaction \( T_1 \) and transaction \( T_2 \). As an example, consider the entry (X, S) — this means that it is not possible for \( T_1 \) to hold an X lock on a resource while \( T_2 \) holds an S lock on the same resource. NL stands for no lock.

| Mode | NL  | IS  | IX  | S   | SIX | X   |
|------|-----|-----|-----|-----|-----|-----|
| NL   | Yes | Yes | Yes | Yes | Yes | Yes |
| IS   | Yes | Yes | Yes | Yes | Yes | No  |
| IX   | Yes | Yes | Yes | No  | No  | No  |
| S    | Yes | Yes | No  | Yes | No  | No  |
| SIX  | Yes | Yes | No  | No  | No  | No  |
| X    | Yes | No  | No  | No  | No  | No  |

### 5.1 Multiple Granularity Locking Protocol

1. Each Xact starts from the root of the hierarchy.
2. To get an S or IS lock on a node, it must hold IS or IX on the parent node.
3. To get an X or IX lock on a node, it must hold IX or SIX on the parent node.
4. Locks must be released in bottom-up order.
5. The 2-phase and lock compatibility matrix rules are enforced as well.
6. The protocol is correct in that it is equivalent to directly setting locks at the leaf levels of the hierarchy.

### 6 Practice Problems

1. Is the following schedule possible under 2PL? S means acquiring a shared lock, X means acquiring an exclusive lock, and U means releasing a lock.

|     |      |      |      |      |
|-----|------|------|------|------|
| T1: | X(A) | X(C) | U(A) | U(C) |
| T2: | S(B) | U(B) |      |      |
| T3: |      |      |      |      |

2. Is the above schedule possible under strict 2PL?

3. For the schedule below, which (if any) transactions will wait under a "wait-die" deadlock avoidance strategy? The priorities in descending order are: T1, T2, T3, T4.³

|    | 1    | 2    | 3    | 4    | 5    | 6    | 7    |
|----|------|------|------|------|------|------|------|
| T1 | S(A) |      |      |      |      |      | X(C) |
| T2 |      | X(A) |      | X(B) |      |      |      |
| T3 |      |      |      |      | X(B) |      |      |
| T4 |      |      | S(B) |      |      | S(C) |      |

4. For the schedule above, which (if any) transactions will wait under a "wound-wait" deadlock avoidance strategy? The priorities in descending order are: T1, T2, T3, T4.

5. What does the "waits-for" graph look like for the above schedule from problem 3? Is there a deadlock?

³ Here the priorities were provided explicitly, but if they are not explicit, then you should default to age: now − start time, as defined in 4.1. For this schedule the default priorities in descending order would be: T1, T2, T4, T3 (since T4 began before T3).

6. For the database system below (hierarchy figure omitted), which lock modes (including IS, IX, or SIX) on which resources are necessary to read \( P_a \)?

7. For the database system above, which lock modes (including IS, IX, or SIX) held by other transactions on \( P_a \) would prevent us from modifying \( r_{a1} \)?

### 7 Solutions

1. Yes, the schedule is possible under 2PL, because no transaction acquires a lock after it begins to release locks.

2. No, the schedule is not possible under strict 2PL, because \( T_1 \) does not release all of its locks at once.
Instead, \( T_3 \) is able to acquire a lock on A after \( T_1 \) releases the X lock on A, but before \( T_1 \) releases the X lock on C. Therefore, the schedule violates strict 2PL, since \( T_3 \) could potentially abort in a cascading abort.

3. \( T_1 \) and \( T_3 \). (TS refers to the timestep, the top row in the schedule.) \( T_2 \) will abort at TS-2 since \( T_2 \) has lower priority than \( T_1 \). \( T_3 \) will wait for \( T_4 \) at TS-5 since \( T_3 \) has higher priority than \( T_4 \). \( T_1 \) will wait for \( T_4 \) at TS-7 since \( T_1 \) has higher priority than \( T_4 \).

4. \( T_2 \). \( T_2 \) will wait for \( T_1 \) at TS-2 since \( T_2 \) has lower priority than \( T_1 \). \( T_4 \) will abort at TS-5 since \( T_3 \) has higher priority than \( T_4 \).

5. There is no deadlock, because there is no cycle in the waits-for graph. There is an edge from \( T_2 \) to \( T_1 \) since \( T_2 \) waits for \( T_1 \) at TS-2. This means there is no edge from \( T_2 \) to \( T_4 \) at TS-4, since \( T_2 \) is already waiting for another transaction. There is an edge from \( T_3 \) to \( T_4 \) at TS-5. There is also an edge from \( T_1 \) to \( T_4 \) at TS-7.

6. We would need the IS lock mode on \( DB \) and \( T_1 \), and the S lock mode on \( P_a \). This allows us to read from \( P_a \) while restricting other transactions as little as possible.

7. S, SIX, and X lock modes held by other transactions on \( P_a \) would prevent us from holding an X lock on \( r_{a1} \), which is necessary to modify \( r_{a1} \). IX and IS locks would not prevent us, as the actual X or S locks held by other transactions are not necessarily on \( r_{a1} \).

### Appendix

We now provide a formal proof of why the presence of a cycle in the waits-for graph is equivalent to the presence of a deadlock.

We use $\alpha_i(R_i)$ to represent a lock request of lock type $\alpha_i$ on the resource $R_i$ by transaction $T_i$. We use $\beta_{ij}(R_i)$ to represent a held lock of lock type $\beta_{ij}$ on the resource $R_i$ by transaction $T_j$.

**Definition 1. Deadlock.** A deadlock is a sequence of transactions (with no repetitions) $T_1, \ldots, T_k$ such that:

- for each $i \in [1, k)$, $T_i$ is requesting a lock $\alpha_i(R_i)$, $T_{i+1}$ holds the lock $\beta_{i,i+1}(R_i)$, and $\alpha_i$ and $\beta_{i,i+1}$ are incompatible, and
- $T_k$ is requesting a lock $\alpha_k(R_k)$, $T_1$ holds the lock $\beta_{k,1}(R_k)$, and $\alpha_k$ and $\beta_{k,1}$ are incompatible.

**Definition 2. Waits-for Graph.** Let $T = \{T_1, \ldots, T_n\}$ be the set of transactions and let $D_i \subseteq T$ be defined as follows:

- if $T_i$ is blocked while requesting some lock $\alpha_i(R_i)$, then $D_i$ is the set of transactions $T_j$ that hold locks $\beta_{ij}(R_i)$ where $\alpha_i$ and $\beta_{ij}$ are incompatible,
- otherwise, $D_i = \emptyset$.

The waits-for graph is the directed graph $G = (V, E)$ with $V = \{1, \ldots, n\}$ and $E = \{(i, j) : T_j \in D_i\}$.

**Theorem.** There is a simple cycle in the waits-for graph $G \iff$ there is a deadlock.

**Proof.** Assume there is a simple cycle $C = \{(i_1, i_2), \ldots, (i_{k-1}, i_k), (i_k, i_1)\} \subseteq E$. By definition of the waits-for graph, $(i, j) \in E \iff T_j \in D_i$, or alternatively, $T_j$ holds a lock $\beta_{ij}(R_i)$ while $T_i$ is blocked requesting $\alpha_i(R_i)$, and $\alpha_i$ and $\beta_{ij}$ are incompatible.
Therefore, $(i_j, i_{j+1}) \in C \subseteq E \iff T_{i_{j+1}}$ holds a lock $\beta_{i_j,i_{j+1}}(R_{i_j})$ while $T_{i_j}$ is blocked requesting $\alpha_{i_j}(R_{i_j})$, where $\alpha_{i_j}$ and $\beta_{i_j,i_{j+1}}$ are incompatible. A similar result holds for $(i_k, i_1)$. But this is simply the definition of a deadlock on the transactions $T_{i_1}, \ldots, T_{i_k}$, so we have our result. $\square$ Queue Skipping An example of queue skipping is the following: Suppose, on resource A, that $T_1$ holds IS and $T_2$ holds an IX lock. The queue has, in order, the following requests: $T_3 : X(A), T_4 : S(A), T_5 : S(A)$, and $T_6 : SIX(A)$. Now, let $T_2$ release its lock. Instead of processing the queue in order and stopping when a conflicting lock is requested (which would result in no locks being granted, as $T_3$ is at the front and wants $X(A)$), queue skipping processes the queue in order, granting locks one by one whenever compatible. Here, it would look at $T_3$'s $X(A)$ request, determine that $X(A)$ is incompatible with the IS(A) lock $T_1$ holds, and move to the next element in the queue. It would then grant $T_4$'s $S(A)$ request, as it is compatible with the held locks of A, and add $T_4 : S(A)$ to the set of locks held on A. It would then look at $T_5 : S(A)$, determine that it is compatible with $T_4 : S(A)$ and $T_1 : IS(A)$, and grant it. Finally, it would look at $T_6 : SIX(A)$, see that it is incompatible with $T_4 : S(A)$ and $T_5 : S(A)$ in the held set, and not grant it as a result. Here is some pseudocode for processing the queue, but this time with queue skipping: ```python # If queue skipping is allowed, here is how to process the queue H = set of held locks on A Q = queue of lock requests for A def request(lock_request): if lock_request is compatible with all locks in H: grant(lock_request) else: addToQueue(lock_request) def release_procedure(lock_to_release): release(lock_to_release) for lock_request in Q: # iterate through the lock requests in order if lock_request is compatible with all locks in H: grant(lock_request) # grant the lock, updating the held set ```
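To complement the pseudocode above, here is a small, runnable sketch (not from the original note) of a single-resource lock table that uses the multigranularity compatibility matrix from section 5 and queue-skipping grants on release. The class and function names (`LockTable`, `request`, `release`) are our own.

```python
from collections import OrderedDict

# Compatibility matrix from section 5 (True = the two modes can be held together).
COMPATIBLE = {
    "IS":  {"IS": True,  "IX": True,  "S": True,  "SIX": True,  "X": False},
    "IX":  {"IS": True,  "IX": True,  "S": False, "SIX": False, "X": False},
    "S":   {"IS": True,  "IX": False, "S": True,  "SIX": False, "X": False},
    "SIX": {"IS": True,  "IX": False, "S": False, "SIX": False, "X": False},
    "X":   {"IS": False, "IX": False, "S": False, "SIX": False, "X": False},
}

class LockTable:
    """Lock table for a single resource, with queue skipping."""

    def __init__(self):
        self.held = {}              # txn id -> mode currently granted
        self.queue = OrderedDict()  # txn id -> requested mode (FIFO order)

    def _compatible_with_held(self, mode, requester):
        return all(COMPATIBLE[mode][m] for t, m in self.held.items() if t != requester)

    def request(self, txn, mode):
        """Grant immediately if compatible with every held lock, else enqueue."""
        if self._compatible_with_held(mode, txn):
            self.held[txn] = mode
            return True
        self.queue[txn] = mode
        return False

    def release(self, txn):
        """Release txn's lock, then scan the whole queue, granting whatever is now compatible."""
        self.held.pop(txn, None)
        for waiter, mode in list(self.queue.items()):
            if self._compatible_with_held(mode, waiter):  # queue skipping: don't stop at the first conflict
                self.held[waiter] = mode
                del self.queue[waiter]

# Usage mirroring the appendix example: T1 holds IS, T2 holds IX, and the queue is
# T3:X, T4:S, T5:S, T6:SIX. Releasing T2's IX grants T4 and T5 but not T3 or T6.
lt = LockTable()
lt.request("T1", "IS"); lt.request("T2", "IX")
for txn, mode in [("T3", "X"), ("T4", "S"), ("T5", "S"), ("T6", "SIX")]:
    lt.request(txn, mode)
lt.release("T2")
assert set(lt.held) == {"T1", "T4", "T5"} and list(lt.queue) == ["T3", "T6"]
```

Unlike the non-skipping version earlier in the note, `release` does not stop at the first incompatible request; it keeps scanning, which is exactly the behaviour worked through in the appendix example.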
{"Source-Url": "https://cs186berkeley.net/resources/static/notes/n12-Xact2.pdf", "len_cl100k_base": 5403, "olmocr-version": "0.1.50", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 29216, "total-output-tokens": 5828, "length": "2e12", "weborganizer": {"__label__adult": 0.0002639293670654297, "__label__art_design": 0.0001773834228515625, "__label__crime_law": 0.0003311634063720703, "__label__education_jobs": 0.0006551742553710938, "__label__entertainment": 5.0008296966552734e-05, "__label__fashion_beauty": 0.00010961294174194336, "__label__finance_business": 0.00033020973205566406, "__label__food_dining": 0.00031447410583496094, "__label__games": 0.00046753883361816406, "__label__hardware": 0.0008120536804199219, "__label__health": 0.0005044937133789062, "__label__history": 0.00018107891082763672, "__label__home_hobbies": 0.00012433528900146484, "__label__industrial": 0.00049591064453125, "__label__literature": 0.00017511844635009766, "__label__politics": 0.00019872188568115232, "__label__religion": 0.0003497600555419922, "__label__science_tech": 0.0291748046875, "__label__social_life": 7.992982864379883e-05, "__label__software": 0.009735107421875, "__label__software_dev": 0.95458984375, "__label__sports_fitness": 0.0002548694610595703, "__label__transportation": 0.0003445148468017578, "__label__travel": 0.0001722574234008789}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 18942, 0.01906]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 18942, 0.19094]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 18942, 0.91501]], "google_gemma-3-12b-it_contains_pii": [[0, 1670, false], [1670, 3096, null], [3096, 4468, null], [4468, 5537, null], [5537, 6884, null], [6884, 7971, null], [7971, 9635, null], [9635, 11522, null], [11522, 12760, null], [12760, 14184, null], [14184, 15006, null], [15006, 17165, null], [17165, 18942, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1670, true], [1670, 3096, null], [3096, 4468, null], [4468, 5537, null], [5537, 6884, null], [6884, 7971, null], [7971, 9635, null], [9635, 11522, null], [11522, 12760, null], [12760, 14184, null], [14184, 15006, null], [15006, 17165, null], [17165, 18942, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 18942, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 18942, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 18942, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 18942, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 18942, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 18942, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 18942, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 18942, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 18942, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 18942, null]], "pdf_page_numbers": [[0, 1670, 1], [1670, 3096, 2], [3096, 4468, 3], [4468, 5537, 4], [5537, 6884, 5], [6884, 7971, 6], [7971, 9635, 7], [9635, 11522, 8], [11522, 12760, 9], [12760, 14184, 10], [14184, 15006, 11], [15006, 17165, 12], [17165, 18942, 13]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": 
[[0, 18942, 0.13415]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
a9f9fddef5fc263a8e52941f18e61babd5f0a93c
[REMOVED]
{"Source-Url": "http://www.star.dist.unige.it/~marco/Data/15lpnmr-comp.pdf", "len_cl100k_base": 6853, "olmocr-version": "0.1.53", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 33784, "total-output-tokens": 9802, "length": "2e12", "weborganizer": {"__label__adult": 0.0004498958587646485, "__label__art_design": 0.0005903244018554688, "__label__crime_law": 0.0007319450378417969, "__label__education_jobs": 0.002819061279296875, "__label__entertainment": 0.000194549560546875, "__label__fashion_beauty": 0.0002772808074951172, "__label__finance_business": 0.0006246566772460938, "__label__food_dining": 0.0005955696105957031, "__label__games": 0.0014142990112304688, "__label__hardware": 0.0011005401611328125, "__label__health": 0.00119781494140625, "__label__history": 0.0004880428314208984, "__label__home_hobbies": 0.0001932382583618164, "__label__industrial": 0.0009937286376953125, "__label__literature": 0.0004978179931640625, "__label__politics": 0.0006880760192871094, "__label__religion": 0.0006785392761230469, "__label__science_tech": 0.280029296875, "__label__social_life": 0.0001900196075439453, "__label__software": 0.0109710693359375, "__label__software_dev": 0.693359375, "__label__sports_fitness": 0.0005726814270019531, "__label__transportation": 0.0009908676147460938, "__label__travel": 0.00027489662170410156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 37701, 0.04209]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 37701, 0.1151]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 37701, 0.88965]], "google_gemma-3-12b-it_contains_pii": [[0, 2678, false], [2678, 6025, null], [6025, 8324, null], [8324, 10966, null], [10966, 14711, null], [14711, 18036, null], [18036, 21080, null], [21080, 24018, null], [24018, 27099, null], [27099, 30267, null], [30267, 33691, null], [33691, 37372, null], [37372, 37701, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2678, true], [2678, 6025, null], [6025, 8324, null], [8324, 10966, null], [10966, 14711, null], [14711, 18036, null], [18036, 21080, null], [21080, 24018, null], [24018, 27099, null], [27099, 30267, null], [30267, 33691, null], [33691, 37372, null], [37372, 37701, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 37701, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 37701, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 37701, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 37701, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 37701, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 37701, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 37701, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 37701, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 37701, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 37701, null]], "pdf_page_numbers": [[0, 2678, 1], [2678, 6025, 2], [6025, 8324, 3], [8324, 10966, 4], [10966, 14711, 5], [14711, 18036, 6], [18036, 21080, 7], [21080, 24018, 8], [24018, 27099, 9], [27099, 30267, 10], [30267, 33691, 11], [33691, 37372, 12], [37372, 37701, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 37701, 0.19459]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
cf9b2c1a1b29efa41d6d35d02e5e529a53170a5e
[REMOVED]
{"len_cl100k_base": 6976, "olmocr-version": "0.1.53", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 55906, "total-output-tokens": 8051, "length": "2e12", "weborganizer": {"__label__adult": 0.000705718994140625, "__label__art_design": 0.0004372596740722656, "__label__crime_law": 0.0012273788452148438, "__label__education_jobs": 0.0005970001220703125, "__label__entertainment": 0.00010478496551513672, "__label__fashion_beauty": 0.0002760887145996094, "__label__finance_business": 0.0015544891357421875, "__label__food_dining": 0.0006513595581054688, "__label__games": 0.0010213851928710938, "__label__hardware": 0.0015544891357421875, "__label__health": 0.00150299072265625, "__label__history": 0.0003418922424316406, "__label__home_hobbies": 0.00015366077423095703, "__label__industrial": 0.0009069442749023438, "__label__literature": 0.00041794776916503906, "__label__politics": 0.0006504058837890625, "__label__religion": 0.0006093978881835938, "__label__science_tech": 0.12060546875, "__label__social_life": 0.00012105703353881836, "__label__software": 0.00766754150390625, "__label__software_dev": 0.857421875, "__label__sports_fitness": 0.00048160552978515625, "__label__transportation": 0.0009531974792480468, "__label__travel": 0.0002589225769042969}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 32611, 0.02645]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 32611, 0.4008]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 32611, 0.87226]], "google_gemma-3-12b-it_contains_pii": [[0, 2352, false], [2352, 5068, null], [5068, 7736, null], [7736, 11045, null], [11045, 12762, null], [12762, 14737, null], [14737, 17007, null], [17007, 20138, null], [20138, 22094, null], [22094, 24454, null], [24454, 26097, null], [26097, 27014, null], [27014, 28799, null], [28799, 31854, null], [31854, 32611, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2352, true], [2352, 5068, null], [5068, 7736, null], [7736, 11045, null], [11045, 12762, null], [12762, 14737, null], [14737, 17007, null], [17007, 20138, null], [20138, 22094, null], [22094, 24454, null], [24454, 26097, null], [26097, 27014, null], [27014, 28799, null], [28799, 31854, null], [31854, 32611, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 32611, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 32611, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 32611, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 32611, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 32611, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 32611, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 32611, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 32611, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 32611, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 32611, null]], "pdf_page_numbers": [[0, 2352, 1], [2352, 5068, 2], [5068, 7736, 3], [7736, 11045, 4], [11045, 12762, 5], [12762, 14737, 6], [14737, 17007, 7], [17007, 20138, 8], [20138, 22094, 9], [22094, 24454, 10], [24454, 26097, 11], [26097, 27014, 12], [27014, 28799, 13], [28799, 31854, 14], [31854, 32611, 15]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 32611, 0.15517]]}
olmocr_science_pdfs
2024-12-12
2024-12-12
9468b224f2bd53b01e33272b4bf1d3217bc4a672
Security Considerations for Microservice Architectures

Daniel Richter, Tim Neumann and Andreas Polze
Hasso Plattner Institute at University of Potsdam, P.O.Box 90 04 60, D-14440 Potsdam, Germany

Keywords: Security, Dependability, Cloud Infrastructure, Microservices.

Abstract: Security is an important and difficult topic in today's complex computer systems. Cloud-based systems adopting microservice architectures complicate that analysis by introducing additional layers. In the test system analyzed, the base layers are combined into three groups (compute provider, encapsulation technology, and deployment), and possible security risks introduced by the technologies used in these layers are analyzed. For the application layer, the analysis focuses on authorization and authentication. The analysis is based on a microservice-based rewrite of the seat reservation system of the Deutsche Bahn, using technologies such as Amazon Web Services, Docker, and Kubernetes. The comparison concludes that the security of communication in the test system could be significantly improved with little effort. If security is not considered an integral part from the beginning of a project, it can easily be neglected and be expensive to add later on.

1 INTRODUCTION

In microservice architectures, a complex system is split into multiple small and mostly independently operating components, which communicate only via well-defined interfaces. This allows each component to be developed, tested, and scaled independently (Richardson, 2017; Newman, 2015; Horsdal, 2016; Fowler, 2016). While microservice architectures can reduce the complexity of a given system, they usually introduce – in comparison to monolithic applications – additional complexity through dependencies on supporting technology, e.g. for deployment, scaling, and management of containerized applications. In addition, the use of additional technologies increases the attack surface (Dragoni et al., 2017).

To get an overview of the technology dependencies that are introduced in cloud-based applications adopting a microservice architecture, we built an application based on Amazon Web Services, Docker, and Kubernetes, which is an experimental, microservice-based reimplementation of the electronic seat reservation system of the Deutsche Bahn. It consists of a customer component (responsible for managing login data), a seat component (providing queryable train schedules and available seats) and a booking component (managing all booking data). Additionally, each of these components is backed by a separate database. The front-ends were developed for two display devices (a single-page web/mobile application and a ticket machine), for which four additional services were introduced (two front-end services and two Backend-for-Frontend services).

Our test system is deployed to Amazon Web Services (AWS). AWS introduces a variety of additional layers into the system: firstly, the actual physical computers in an AWS data center. This is followed by three core AWS compute resources: Elastic Compute Cloud (EC2)\(^1\), which provides and manages virtual machines; Elastic Block Storage (EBS)\(^2\), which provides networked data storage volumes to EC2 instances; and Virtual Private Cloud (VPC)\(^3\), which offers isolated networks for EC2 instances. Inside AWS, the test system consists of several EC2 instances. All EC2 Kubernetes nodes run the Kubernetes node administration software, responsible for running further software on the node.
The other important piece of software running on Kubernetes nodes is Docker, which manages the individual containers used to run the actual software deployed by Kubernetes.

To simplify the analysis, we split our testbed into three base layer groups: compute provider, encapsulation technology, and deployment. The highest layer, the application layer, is the most complex layer in the system. Our focus was to secure the communication between individual application components (authentication and authorization).

\(^1\)https://aws.amazon.com/ec2/details/ \(^2\)https://aws.amazon.com/ebs/details/ \(^3\)https://aws.amazon.com/vpc/details/

2 THE BASE LAYERS

In this section, the base layers – all layers except the application layer – are analyzed. Since multiple layers often work together to provide one function, the layers have been organized into the three groups compute provider, encapsulation technology, and deployment. Following that comparison, the security of the chosen technologies is analyzed.

2.1 Technologies for Layer Groups

As the technology used in each layer can directly impact its security analysis, we first compare multiple alternative technologies for each layer group. Our test system's layers are grouped as follows: The compute provider group consists of all AWS-related layers and generally provides some kind of computing infrastructure consisting of either physical or virtual machines, some networking solution, and some file storage system. The encapsulation technology group mainly consists of the Docker layer and the Weave layer. Both can be used independently of each other; here, they work together to provide a distributed runtime environment for containers. This group is responsible for isolating services from each other so they cannot interfere with each other (except through predefined communication). The deployment group contains the Kubernetes layers and is responsible for taking software in source or binary format and ensuring its execution and configuration.

2.1.1 Compute Provider

A compute provider is required to provide the infrastructure to run some software. The core functionality is starting a new machine (based on some template) and connecting it to some network. This usually involves assigning some kind of computing capacity to the new machine, configuring it, and starting it. Two types of machines can be distinguished: To start a physical machine, some hardware is allocated and configured. In some situations, this may involve purchasing and installing the hardware beforehand. A physical machine itself can run multiple virtual machines. Starting a virtual machine usually involves allocating some capacity on an existing physical machine and then starting it from some predefined image. Another important classification is the type of provision: a data center owned by the company planning to use the compute provider, a data center operated by a third party, or a cloud provider (a provider of mostly virtual machines with the additional restriction that new machines can be requested in an automated fashion, using an API). Since cloud providers are much more modern than the data center-based approaches, they were the technology of choice for our testbed. The most commonly known commercial cloud providers are AWS, Google Cloud Platform (GCP)\(^4\) and Microsoft Azure\(^5\) (Coles, 2017). The most popular self-hosted cloud provider is OpenStack\(^6\) (Buest, 2014), although it is also offered in hosted form by various third parties.
Even though the cloud providers are mostly equal in functionality, AWS was chosen for two reasons: AWS was by far the largest cloud provider (Coles, 2017), and it was also the cloud provider of choice of Deutsche Bahn, our project partner.

2.1.2 Encapsulation Technology

Encapsulation could be achieved by running each service on a separate machine. The decision to use a separate encapsulation layer was made to achieve a higher degree of flexibility. In microservice architectures, services are usually very lightweight and may only run for a short period of time. This makes, for example, the use of machines provided by AWS uneconomical: AWS bills at least one hour for a started instance, and even the smallest EC2 instance type is too large for a single service. Therefore, an encapsulation technology which allows running multiple services on one EC2 instance was needed. On modern operating systems, there are generally two different encapsulation technologies:

**VM-based encapsulation** Each encapsulated process runs in its own virtual machine. This involves some overhead, since hardware such as storage devices needs to be simulated. Since a virtual machine requires an entire operating system running inside it, virtual machines are generally rather heavy-weight.

**Container-based encapsulation** Operating system-provided methods are used to isolate processes on the host system. This imposes a smaller overhead than a VM-based approach and also allows resources to be easily shared between encapsulated processes or with the host. As a limitation, only encapsulating software for the same operating system as the host is supported.

\(^4\)https://cloud.google.com/ \(^5\)https://azure.microsoft.com/en-us/ \(^6\)https://www.openstack.org/

Each technology has several advantages and disadvantages; the most important arguments are listed below:

(a) VM-based solutions provide greater isolation than container-based solutions. A vulnerability in the encapsulated software could thus have a greater impact when using containers.
(b) Container-based solutions have a lower overhead, which allows for more efficient usage of computing resources (Felter et al., 2015).
(c) VM-based solutions can run software independently of the host operating system, whereas container-based solutions only support software written for the host operating system.
(d) There is an abundance of tools, infrastructure, and pre-built software available for mainstream container-encapsulation technology. This is not necessarily the case for VM-based solutions.

For our testbed, the choice fell on Docker – a container-based approach – which has been widely used in IT projects in recent years and for which a lot of tools and pre-built software are available.

One further layer is part of the encapsulation group: the networking layer. If containers or virtual machines are used, multiple network addresses (one for each encapsulated piece of software) are needed. Additionally, it may be desirable to allow encapsulated applications to communicate with each other but not with the machines they are running on. As such, a separate network is usually required for encapsulated applications. Some technologies require special support from the host's networking hardware, whereas others build so-called overlay networks (Galuba and Girdzijauskas, 2009) where each machine runs special software that wraps network packets destined for another machine with some metadata and sends the wrapped packets to the other machine over a physical network.
Our testbed uses Weave Net, which provides a virtual network, available on all nodes, that is used by the software deployed by Kubernetes to communicate.

2.1.3 Deployment

Deployment refers to the action of taking a piece of software, configuring it, and ensuring it is running on some machine. While this can be done "by hand", an automated system was required to allow smooth operation of the testbed and avoid an error-prone, manual process. The project's continuous integration infrastructure required setting up entirely new environments, each with about half a dozen services. There are several technologies which can be used to distribute containers among multiple nodes, with popular choices being Docker Swarm\(^7\) and Kubernetes.

2.2 Security Evaluation of Base Layer Technologies

Given our testbed, we briefly discuss selected security aspects of the chosen base layer technologies.

2.2.1 Compute Provider

As the data center is managed by Amazon, its security cannot be influenced by customers. However, Amazon states that its data centers comply with various commercial and governmental security guidelines (Amazon Web Services, 2017) such as PCI DSS Level 1 (Payment Card Industry Data Security Standard). Among others, PCI DSS requires a) "Restricting physical access to cardholder data" (which, in the context of AWS, means that physical access to the actual hardware must be restricted), b) "Track and monitor all access to network resources and cardholder data", and c) "Regularly test security systems and processes" (PCI Security Standards Council, 2016). Given this certification and others, for the scope of this analysis it can be assumed that an AWS data center and its hardware are set up and managed in a secure manner.

AWS allows the modification of resources using either a web interface (AWS Management Console) or an API. Access to the Management Console is secured using a username and password and, depending on the configuration, a two-factor authentication token. Accessing the API requires an access key. AWS offers a fine-grained permission system and the CloudTrail\(^8\) service which, if configured, records all access to AWS resources along with which user accessed the resource and how they authenticated to do so.

AWS allows the creation of detailed rules for communication between EC2 instances. This feature is used extensively in the test system. Figure 1 shows the inbound network rules for an ingress node – which ports are open is tightly restricted, with only those that are absolutely required being reachable.

\(^7\)https://docs.docker.com/engine/swarm/ \(^8\)https://aws.amazon.com/de/cloudtrail/

Earlier Kubernetes versions offered either full or no access to the cluster; the API server even provided an unauthenticated and unencrypted endpoint. This changed with Kubernetes 1.6, which introduced Role-Based Access Control (RBAC), a fine-grained permission system.

3 THE APPLICATION LAYER

The application layer contains the individual application components and is the most complex layer in the test system. The security analysis of this layer will focus on securing the communication between those individual components. To simplify that analysis, the application components are grouped together into another set of layers, based on how information flows through the system. The most important aspect of securing the communication between application components is preventing unauthorized access, which usually involves the processes of authentication and authorization.
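As a concrete illustration of what such authentication can look like at the application level, the following is a minimal, hypothetical sketch of a MAC-based check between two services – one of the methods compared in Section 3.1 below. It is not taken from the paper's test system; the function names and the shared-secret handling are our own simplifications.

```python
import hashlib
import hmac

# Hypothetical pre-shared secret, e.g. injected into both services via configuration.
SHARED_SECRET = b"example-secret-not-for-production"

def sign_request(method: str, path: str, body: bytes) -> str:
    """Client side: compute an HMAC over the request so the secret itself is never sent."""
    message = method.encode() + b"\n" + path.encode() + b"\n" + body
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

def verify_request(method: str, path: str, body: bytes, received_mac: str) -> bool:
    """Server side: recompute the MAC and compare in constant time."""
    expected = sign_request(method, path, body)
    return hmac.compare_digest(expected, received_mac)

# Usage: the calling service attaches the MAC (e.g. as a header); the callee verifies it.
mac = sign_request("POST", "/bookings", b'{"train": 123, "seat": "42A"}')
assert verify_request("POST", "/bookings", b'{"train": 123, "seat": "42A"}', mac)
assert not verify_request("POST", "/bookings", b'{"train": 123, "seat": "13B"}', mac)
```

This mirrors the trade-off discussed below: the secret must be distributed to both sides in advance, but it is never transmitted with the request itself.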
3.1 Authentication and Authorization Methods

Various methods exist to implement authentication and authorization in IT systems; some common ones are listed below:

- **Trust** The implemented service trusts that it is only accessed by those parties who should access it.
- **Network policy** A network policy which prevents all but authorized parties from communicating with the service is enforced.
- **IP-based** The service itself makes a decision based on the IP address from which the request to the service originates.
- **Key/token-based** An access key, or access token, is transmitted with each request, and only if a known and correct key is passed is access granted to the service.
- **MAC-based (Message Authentication Code)** The contents of the request, as well as the access key, are passed through a cryptographic hash function, and the result is transmitted.
- **Signing-based & Certificate-based** Asymmetric cryptography is used to sign the request.
- **Session-based & Password-based** The first request to a service is unauthenticated and initiates the session. Subsequent requests identify the session they belong to by, for example, using one of the previous methods.

Table 1: Authentication Method Summary.

| Method | Fine-grained access control | Secret-based | Session-based | Network-based | Stack Level |
|---|---|---|---|---|---|
| Trust | No | No | No | No | N/A |
| Network policy | No | No | No | Yes | Network |
| IP-based | Yes | No | No | Yes | Network/Application |
| Key/token-based | Yes | Yes, pre-shared | No | No | Application |
| MAC-based | Yes | Yes, pre-shared | No | No | Application |
| Signing-based | Yes | Yes, asymmetric | No | No | Application |
| Certificate-based | Yes | Yes, asymmetric | No | No | Transport |
| Session-based | Yes, within a session | Yes, after session start | Yes | No | Application |
| Password-based | Yes | Yes, pre-shared and after session start | Yes | No | Application |

To properly compare authentication and authorization methods and to analyze their applicability to the different communication channels in the next section, Table 1 gives a summary of all the methods based on various criteria. Each column corresponds to one of the criteria listed below:

**Support of fine-grained access control** classifies whether a method supports more granular permissions than either no access or unrestricted access. The addition "within a session" means that users can gain (exclusive) access to additional resources valid for the duration of their session.

**Secret-based** classifies whether a method requires clients to store and manage some kind of secret. The following types of secrets are distinguished:

- **pre-shared** A secret which must be known to the client and server before the initial request.
- **asymmetric** A secret for use with asymmetric cryptography – the client stores a secret key and the server recognizes the associated public key (based on a list of known public keys or using another level of asymmetric cryptography).
- **after session start** During the initiation of a session, a secret is sent to or generated on the client. This secret is used during subsequent requests.

**Session-based** classifies whether a method makes use of, or requires, sessions.

**Network-based** classifies whether a method is network-based.

**Stack level** classifies at which level in the technology stack a method operates. Three values are used:

- **Network** The method is implemented as part of, or relies on, the network.
- **Application** The method is implemented in the application itself.
- **Transport** The method is implemented in the transport layer (somewhere between the network and the actual application).

The following investigation of the individual communication channels is simplified by ordering the different authentication and authorization methods based on the level of security they provide. To avoid duplicating that analysis, a conditional ordering is defined and given below:

Footnote: Often multiple methods will be equally applicable to a channel, in which case the method of choice should be the most secure authentication and authorization method.

**Trust vs. network-based** Since trust provides no authentication and authorization, network-based methods are more secure than the trust method.

**Network policy vs. IP-based** The network policy-based method is generally preferable, as it is independent of the application. The IP-based method, however, has the advantage that it supports fine-grained access control, so if that is needed, the IP-based method is the only possible network-based method.

**Network-based vs. secret-based** For both methods, the application itself is vulnerable: if an attacker is able to compromise the application, they can gain access to secrets available to the application and act on behalf of the application, using the available network interfaces. For the network-based methods, the network is additionally vulnerable: if an attacker gains sufficient access to the network, they can impersonate the application. For the secret-based methods, the secret distribution mechanism is an additional vulnerability. Thus, which method is more secure depends on whether a compromise of the network or of the secret distribution mechanism is more likely.

**Token-based vs. MAC-based** Both methods work similarly; however, for the MAC-based methods the token is never transmitted over the network, which decreases the risk of it being intercepted.

**MAC-based vs. signing-based** A signing-based method uses different keys on the client and server, reducing the risk of compromise. Additionally, depending on the exact implementation, keys can be generated independently of the server, which decreases the complexity and attack surface of the server.

**Signing-based vs. certificate-based** While certificate-based methods should offer the same security as signing-based methods, they have the advantage of standardization. This standardization makes it easier to replace an application using the certificate-based methods and also means a lower likelihood of introducing security vulnerabilities compared to implementing custom signing-based methods.
**Session-based** A session-based method is usually used in different situations: it can be used when using a secret is impossible, and it identifies one user over several consecutive requests, but not between sessions.

**Password-based** This method is mostly the same as the token-based method. However, instead of sending the token on every request, it is sent only on the first request; afterwards, some kind of session identifier is sent with each request. This makes it more secure than the token-based method, as the risk of leaking the token is reduced.

3.2 Evaluation of Authentication and Authorization in our Testbed

Our testbed is a simplified reimplementation of the *Elektronische Platzbuchungsanlage* (EPA, "electronic seat reservation and booking system") of Deutsche Bahn, which is responsible for managing seat reservations in trains all across Germany. It consists of:

**Customer component** This component is responsible for managing login data. It is mostly unused in the current project, as the project focused on the ticket purchase and seat reservation process.

**Seat component** "Seat & schedule component" would probably be a more appropriate name for this component; however, the name assigned by the previous project was kept for consistency. This component provides a queryable schedule of all trains, as well as the ability to check which seats are available on a given train.

**Booking component** This component manages all booking data: which routes were booked and which seats are reserved on which trains.

Additionally, each of those three components is backed by a separate database. The front-ends for those components were developed for two display devices: a single-page web/mobile application and a ticket machine user interface, which was also based on web technologies and built as a single-page web application.

Figure 2: Overview of Communication Groups (figure not reproduced here).

The components of our testbed have been grouped together into the following groups/layers to reduce the number of communication channels which need to be considered:

**Data Storage Group** contains only the databases backing the customer, seat and booking components. It is only accessed by the core components group, and there is no intra-group communication in the test set-up, although that is certainly possible in other situations.

**Core Components Group** contains the three core components: customer, seat and booking. These are accessed by the BFF group and do access the data storage group. Additionally, some requests trigger intra-group communication between the components in the group.

**Backend-for-Frontend (BFF) Group** consists of the two BFFs (Newman, 2015), one for each display device. These are accessed directly by the display devices and communicate with the core components group, if necessary. No intra-group communication happens between the different BFFs.

**Front-End Group** consists of the static web servers for the two front-ends. They are accessed by the display devices as well; however, they perform no other communication.

As shown in Figure 2, there are seven different communication channels between, within, and with the four groups. This number is further increased by the fact that the two display devices have not been combined, as they have different communication patterns depending on the version: the web/mobile front-end and BFF are open to the public. In this respect, no assumptions can be made about the requests sent to these services.
It cannot be assumed that communication with these services will only take place from the official application and in the manner intended by the application developers. The BFF and the front-end must be able to handle all requests correctly, including those made directly by a malicious third party. The opposite is the case with the ticket vending machines: the team controls the hardware and possibly the network that is used for communication with the front-end and the BFF. Therefore, if necessary, it can be assumed that the communication from the ticket machine will only take place in the way intended. But even without this assumption, the hardware of the ticket vending machine is still controlled by the team and can be regarded as trustworthy – at least with a sufficiently secure hardware design – which enables additional security-relevant operations such as cryptography with pre-shared keys.

The communication channels differ as follows:

(a) This communication channel interacts with third-party software; therefore, the team did not have full control over the authentication and authorization methods used.
(b) Communication between different core components can usually be assumed to happen over a trusted network.
(c) Communication between the BFFs and core components is very similar to (b), except that they may reside on separate networks and that the BFFs may be considered untrusted since they are directly accessible from a public network.
(d) Communication takes place over a public network and originates from an untrusted device.
(e) Following the defense-in-depth approach, it was assumed that communication takes place over a public network here as well. However, contrary to (d), communication originates from a trusted device.
(f) Once again, communication happens over a public network from an untrusted device. Since the front-end services only offer static resources, which must all be publicly accessible due to the nature of a web application, no authorization or authentication is required or possible here.
(g) As opposed to (f), resources accessed using this channel do not have to be publicly accessible. As such, some form of authorization and authentication can be implemented if the resources should remain inaccessible to the public. Similar to (e), it was again assumed that communication takes place over a public network.

In total, two authentication and authorization methods were used: token-based authentication and authorization was used to connect to the database servers; session-based authentication and authorization was used for connections between the display devices and the BFFs.

4 CONCLUSION AND FUTURE WORK

This paper evaluated the security of a microservice architecture: it first analyzed the security of the base layers before focusing on authentication and authorization in the application layer. The practicality of multiple authentication and authorization methods was analyzed in the context of a reimplementation of the Elektronische Platzbuchungsanlage of Deutsche Bahn. In comparison to monolithic applications, the use of cloud infrastructure (compute provider layer) introduces additional complexity as well as additional attack vectors. Compared to classic VM-based cloud applications, the technologies introduced in the encapsulation technology layer mean that more security requirements have to be met.
Currently, the analysis of additional security concerns is limited to aspects regarding authorization and authentication (A2:2017, number two in the OWASP Top 10 (Open Web Application Security Project, 2017)). But an increasing number of used technologies affects other risks, too: security misconfiguration (A6:2017), vulnerable components (A9:2017), or insufficient logging and monitoring (A10:2017). Also, DevOps, a software engineering culture and practice aimed at unifying software development and operations that is often used in conjunction with microservices, introduces non-production environment exposure as a microservice-specific risk.

The main conclusions of this paper are that 1) modern computer systems are very complex, due to the many layers they are made up of, and 2) security is hard, takes effort, and should be an important consideration from the beginning of a project instead of an afterthought. At many points, security measures were not taken “for simplicity” or because “(human) resources were unavailable”. While this may have been acceptable in the test system, a real-world product should never be launched with this many issues or areas of improvement.

We believe this shows very clearly why security is such a difficult topic: the benefits are hidden and the costs are high. The implementation of security in several microservices and at all system levels requires effort and careful planning. Once a project has started, security can easily be neglected for more immediately pressing concerns and may be difficult and even more expensive to add later. Even if security is a consideration from the beginning, there is often a choice between complexity and practicality. For example, to increase security, it would be possible to implement not only certificate-based authentication and authorization, but network policy-based authentication and authorization as a second layer of security. However, this would increase costs and complexity. Although the certificate-based method is clearly more secure than the token-based method, it is also more complex to implement, because an additional infrastructure is required to manage all cryptographic keys.

As a final summary, we conclude that security should be considered from the very beginning of planning a system, to be able to implement effective and comprehensive security measures throughout the project – especially if monolithic applications are to be reimplemented as microservice applications.

ACKNOWLEDGEMENTS

The authors would like to thank Lena Feinbube, Leonard Marschke, Cornelius Pohl, Robert Beilich, Tim Basel, Timo Traulsen, Henry Hubler, Dr. Stephan Gerberding, Wolfgang Schwab, and Ingo Schwarzer for their support and assistance with this project.

REFERENCES
**Algorithmic Cost and Complexity**

There are two aspects of algorithmic performance:

• Time
  - Instructions take time.
  - How fast does the algorithm perform?
  - What affects its runtime?

• Space
  - Data structures take space.
  - What kind of data structures can be used?
  - How does the choice of data structure affect the runtime?

**Measuring Performance**

For example: a simple calculator. Perform the four basic arithmetic functions:
- Addition
- Subtraction
- Multiplication
- Division

Prompt the user for:
- Operand 1
- Operand 2
- Operator

**Algorithm Calculator**

```c
double op1;       // 1st operand
double op2;       // 2nd operand
double answer;    // result
char   operator;  // operator

// obtain operands and operator from user
printf("Enter the first operand: ");
scanf("%lf", &op1);
printf("Enter the second operand: ");
scanf("%lf", &op2);
printf("Enter the operator: ");
scanf(" %c", &operator);   // leading space skips the newline left by the previous scanf

// perform the calculation
if (operator == '+') answer = op1 + op2;
if (operator == '-') answer = op1 - op2;
if (operator == '*') answer = op1 * op2;
if (operator == '/') answer = op1 / op2;

printf("The answer is %f \n", answer);
// end algorithm Calculator
```

**Analyzing Work Done**

How many operations does Calculator do?
- read/write pairs (to obtain data)
- Testing conditionals
- Branching
- Performing operations
- Assigning variables

Note: We will ignore the read/write instructions. They deal with the world “outside the algorithm” and involve factors beyond what we care about here.

**Measures of Work** (ignoring read/write pairs)

- What’s the best case? Addition
  - four tests (@ 2 each)
  - one add
  - one assignment
  - total: 10
- What’s the worst case? Division
  - four tests (@ 2 each)
  - one divide
  - one assignment
  - total: 10
- What’s the average (expected) case?
  - 10

**A Better Way?**

```c
// Perform the calculation
if (operator == '+')
    answer = op1 + op2;
else if (operator == '-')
    answer = op1 - op2;
else if (operator == '*')
    answer = op1 * op2;
else if (operator == '/')
    answer = op1 / op2;

printf("The answer is %f\n", answer);
// end of algorithm
```

**Measures of Work** (ignoring read/write pairs)

- What’s the best case? Addition
  - one test (@ 2 each)
  - one add
  - one assignment
  - total: 4
- What’s the worst case? Division
  - four tests (@ 2 each)
  - one divide
  - one assignment
  - total: 10
- What’s the average (expected) case?
  - (4 + 6 + 8 + 10) / 4 = 7

**The Dangers of “Average” Work**

In many circumstances, the assumption of a random distribution of input values is a faulty one. What about a cash register?
- Addition operators most frequent (ring up an item)
- Subtraction less frequent (use a coupon)
- Multiplication rare (buy many of the same item)
- Division very rare (???)

The average work in this situation would migrate somewhat towards 4 from the mean of 7 suggested by the assumption of random data. Don’t assume a random distribution without reason.

**Algorithm Analysis: Loops**

Consider the following nested loops (LOOP1 and LOOP2), intended to sum each of the rows in an N×N two-dimensional array, storing the row sums in a one-dimensional array rows and the overall total in grandTotal.

**LOOP 1:**

```c
grandTotal = 0;
for (k = 0; k <= n-1; ++k) {
    rows[k] = 0;
    for (j = 0; j <= n-1; ++j) {
        rows[k] = rows[k] + matrix[k][j];
        grandTotal = grandTotal + matrix[k][j];
    }
}
```

**LOOP 2:**

```c
grandTotal = 0;
for (k = 0; k <= n-1; ++k) {
    rows[k] = 0;
    for (j = 0; j <= n-1; ++j)
        rows[k] = rows[k] + matrix[k][j];
    grandTotal = grandTotal + rows[k];
}
```

- What is the number of addition operations?
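Counting the additions directly makes the answer easy to see: the inner loop of LOOP 1 performs two additions and runs \(N\) times for each of the \(N\) outer iterations, while LOOP 2 performs one addition per inner iteration plus one per outer iteration:

\[
\text{LOOP 1: } 2 \cdot N \cdot N = 2N^2
\qquad\qquad
\text{LOOP 2: } 1 \cdot N \cdot N + 1 \cdot N = N^2 + N
\]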
**2N² versus N² + N**

- Assuming we’re working with a hypothetical computer that requires 1 microsecond to perform an addition, for N = 1000, loop 1 would take 2 sec. and loop 2 would require just over 1 second. (For N = 100,000 the times would be approximately 6 hours and 3 hours, respectively.)

**Big-O Notation**

- It is a method of algorithm classification.

**Definition:** Suppose there exists a function \( f(n) \) defined on the nonnegative integers such that the number of operations required by an algorithm for an input of size \( n \) is less than or equal to some constant \( c \) times \( f(n) \) (i.e. \( c \cdot f(n) \)) for all but finitely many \( n \). That is, the number of operations is at worst proportional to \( f(n) \) for all large values of \( n \). Such an algorithm is said to be an \( O[f(n)] \) algorithm.

- Loop 1 and Loop 2 are both in the same big-O category: \( O(N^2) \)

**Example 1:** Use big-O notation to analyze the time efficiency of the following fragment of C code:

```c
for (k = 1; k <= n/2; k++) {
    .
    for (j = 1; j <= n*n; j++) {
        .
    }
}
```

Since these loops are nested, the efficiency is \( n^3/2 \), or \( O(n^3) \) in big-O terms. Thus, for two loops with \( O(f_1(n)) \) and \( O(f_2(n)) \) efficiencies, the efficiency of the nesting of these two loops is \( O(f_1(n) \times f_2(n)) \).

**Example 2:** Use big-O notation to analyze the time efficiency of the following fragment of C code:

```c
for (k = 1; k <= n/2; k++) {
    .
}

for (j = 1; j <= n*n; j++) {
    .
}
```

The number of operations executed by these loops is the sum of the individual loop efficiencies. Hence, the efficiency is \( n/2 + n^2 \), or \( O(n^2) \) in big-O terms. Thus, for two loops with \( O(f_1(n)) \) and \( O(f_2(n)) \) efficiencies, the efficiency of the sequencing of these two loops is \( O(f_D(n)) \), where \( f_D(n) \) is the dominant (faster-growing) of the functions \( f_1(n) \) and \( f_2(n) \).

**Example 3:** Use big-O notation to analyze the time efficiency of the following fragment of C code:

```c
k = n;
while (k > 1) {
    .
    k = k/2;
}
```

Since the loop variable is cut in half each time through the loop, the number of times the statements inside the loop will be executed is log₂n. Thus, an algorithm that halves the data remaining to be processed on each iteration of a loop will be an O(log₂n) algorithm.

**Classification of Algorithms**

Algorithms whose efficiency is dominated by a log₂n term are often called logarithmic algorithms. Because log₂n increases much more slowly than n itself, logarithmic algorithms are generally very efficient.

Algorithms whose efficiency can be expressed in terms of a polynomial of the form

\[ a_m n^m + a_{m-1} n^{m-1} + ... + a_2 n^2 + a_1 n + a_0 \]

are called polynomial algorithms. Such algorithms are O(n^m). For m = 1, 2, or 3 they are called linear, quadratic, or cubic algorithms, respectively.

Algorithms with efficiency dominated by a term of the form \( a^n \) are called exponential algorithms. They are of more theoretical than practical interest because they cannot reasonably be run on typical computers for moderate values of n.

**Complexity of Linear Search**

In measuring performance, we are generally concerned with how the amount of work varies with the data. Consider, for example, the task of searching a list to see if it contains a particular value.

• A useful search algorithm should be *general*.
• Work done varies with the size of the list.
• What can we say about the work done for a list of *any* length?
```c
i = 0;
while (i < MAX && this_array[i] != target)
    i = i + 1;
if (i < MAX)
    printf("Yes, target is there \n");
else
    printf("No, target isn't there \n");
```

**Order Notation**

How much work to find the target in a list containing N elements?

*Note*: we care here only about the *growth rate* of work. Thus, we *toss out all constant values*.

**Best Case** - It’s the first value: “order 1,” O(1)
**Worst Case** - It’s the last value, N: “order N,” O(N)
**Average** - N/2 (if the value is present): “order N,” O(N)

• Best Case work is *constant*; it does not grow with the size of the list.
• Worst and Average Case work is *proportional* to the size of the list, N.

**Order Notation**

*O(1)* or “Order One”:
- does *not* mean that it takes only one operation
- *does* mean that the work *doesn’t change* as N changes
- *is* a notation for “constant work”

*O(N)* or “Order N”:
- does *not* mean that it takes N operations
- *does* mean that the work changes in a way that is proportional to N
- *is* a notation for “work grows at a linear rate”

**Improving on Linear Search**

Can we do better? Array of the Social Security Numbers of all students in this class. The index is the Social Security Number.

| 000 00 0001 | 000 00 0002 | 000 00 0003 | ... | 999 99 9998 | 999 99 9999 |

The result is O(1), but wastes HUGE space.

**Getting Realistic - Binary Search**

• Assume a sorted list of 16 SSNs
• Search for one via binary search
• How much work is done now?

16 → 8 (comparison #1) → 4 (comparison #2) → 2 (comparison #3) → 1 (comparison #4)

• Worst case: for 16 items, it takes 4 comparisons
• In general, it takes \( \log_2 N \) comparisons: \( \log_2 16 = 4 \) because \( 2^4 = 16 \)
• Binary search is an \( O(\log N) \) algorithm

Since it repeatedly cuts its remaining work in half, binary search involves work that grows at a rate proportional to the log of \( N \).

How much better is \( O(\log N) \)?

<table>
<thead>
<tr><th>\( N \)</th><th>\( O(\log N) \)</th></tr>
</thead>
<tbody>
<tr><td>16</td><td>4</td></tr>
<tr><td>64</td><td>6</td></tr>
<tr><td>256</td><td>8</td></tr>
<tr><td>1024 (1Kilo)</td><td>10</td></tr>
<tr><td>16,384</td><td>14</td></tr>
<tr><td>131,072</td><td>17</td></tr>
<tr><td>262,144</td><td>18</td></tr>
<tr><td>524,288</td><td>19</td></tr>
<tr><td>1,048,576 (1Meg)</td><td>20</td></tr>
<tr><td>1,073,741,824 (1Gig)</td><td>30</td></tr>
</tbody>
</table>

• As \( N \) gets large, the difference becomes great.

**Data Structures and Complexity**

- Can we assume that data are:
  - sorted and
  - stored in an appropriately sized array?

```c
#define MAX 30
int array[MAX];
```

- Still... we need to know what N is in advance to declare an array.
- A Binary Search Tree (BST) can be very valuable if N is not predictable. A BST allows O(log N) search performance if certain conditions are met: the tree must be full and balanced.

**Data Structures and Complexity** (comparison of operation costs, e.g. traverse and insert, for a sorted Linked List, a sorted Array, a Binary Tree, and a BST)

Insertion = cost to find location + cost of insertion.
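A minimal sketch of the binary search just described (the array contents and function name below are illustrative examples, not part of the original notes):

```cpp
#include <cstdio>

// Return the index of target in the sorted array a[0..n-1], or -1 if it is absent.
// Each iteration halves the remaining range, so at most about log2(n) probes are made.
int binarySearch(const int a[], int n, int target) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   // midpoint, written to avoid overflow
        if (a[mid] == target)
            return mid;
        else if (a[mid] < target)
            low = mid + 1;                  // discard the lower half
        else
            high = mid - 1;                 // discard the upper half
    }
    return -1;                              // target is not in the array
}

int main() {
    int a[16] = {1, 3, 4, 7, 9, 12, 15, 18, 20, 23, 27, 31, 36, 40, 44, 50};
    printf("index of 23: %d\n", binarySearch(a, 16, 23));   // prints 9
    printf("index of 5:  %d\n", binarySearch(a, 16, 5));    // prints -1
    return 0;
}
```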
**Bubblesort Revisited**

Bubblesort works by comparing and swapping values in a list, for example: 23 78 45 8 32 56.

**Complexity of Bubblesort**

How many comparisons will the inner loop do?

(N-1) + (N-2) + (N-3) + ... + 1

Average: N/2 for each “pass”

How many “passes” (outer loop) are there? N – 1

Tossing constants:
- Each loop involves O(N) work
- The inner loop will be executed for each iteration of the outer loop

So what is the complexity? O(N) * O(N) = O(N²)

```c
void bubbleSort(int list[], int last)
{
    int current;

    for (current = 0; current < last; current++)
        bubbleUp(list, current, last);
    return;
}

/* Move the lowest element in the unsorted portion to the current element
   in the unsorted portion.
   Pre   list must contain at least one element
         current: beginning of unsorted portion
         last: identifies end of the unsorted data
   Post  array segment has been rearranged so that the lowest element is
         now at the beginning of the unsorted portion
*/
void bubbleUp(int list[], int current, int last)
{
    int walker;
    int temp;

    for (walker = last; walker > current; walker--)
        if (list[walker] < list[walker - 1]) {
            temp = list[walker];
            list[walker] = list[walker - 1];
            list[walker - 1] = temp;
        }
    return;
}
```

<table>
<thead>
<tr><th>N</th><th>O(LogN)</th><th>O(N²)</th></tr>
</thead>
<tbody>
<tr><td>16</td><td>4</td><td>256</td></tr>
<tr><td>64</td><td>6</td><td>4K</td></tr>
<tr><td>256</td><td>8</td><td>64K</td></tr>
<tr><td>1,024</td><td>10</td><td>1M</td></tr>
<tr><td>16,384</td><td>14</td><td>256M</td></tr>
<tr><td>131,072</td><td>17</td><td>16G</td></tr>
<tr><td>262,144</td><td>18</td><td>6.87E+10</td></tr>
<tr><td>524,288</td><td>19</td><td>2.74E+11</td></tr>
<tr><td>1,048,576</td><td>20</td><td>1.09E+12</td></tr>
<tr><td>1,073,741,824</td><td>30</td><td>1.15E+18</td></tr>
</tbody>
</table>

**Complexity of MergeSort**

Merge sort requires $O(N \log_2 N)$ comparisons. The reasoning: all the merge operations across any given level of the trace diagram will require $O(N)$ comparisons, and there are $\log_2 N$ levels. Hence, the overall efficiency is $O(N \log_2 N)$.

In level 1: there is one merge operation. We’re merging 2 lists of size $N/2$.

In level 2: there are two merge operations. We’re merging 2 pairs of lists of size $N/4$.

In the last level (i.e. level $\log_2 N$): there are $N/2$ merge operations. We’re merging $N/2$ pairs of lists of size 1.

How much work is involved in each level?

• Each of the $N$ numerical values is compared or copied during each level.
• Therefore, the work for each level is $O(N)$.

Thus the total for MergeSort is: $O(\log N) \times O(N) = O(N \log N)$

**Example Problems**

1. Algorithm A runs in $O(N^2)$ time, and for an input size of 4, the algorithm runs in 10 milliseconds. How long can you expect it to take to run on an input size of 16?

2. Algorithm A runs in $O(\log_2 N)$ time, and for an input size of 16, the algorithm runs in 28 milliseconds. How long can you expect it to take to run on an input size of 64?

3. Algorithm A runs in $O(N^3)$ time.
For an input size of 10, the algorithm runs in 7 milliseconds. For another input size, the algorithm takes 189 milliseconds. What was that input size?

4. For an $O(N^k)$ algorithm, where $k$ is a positive rational number, a friend tells you that an instance of size $M$ took 16 seconds to run. You run an instance of size $4M$ and find that it takes 256 seconds to run. What is the value of $k$?

5. Algorithm A runs in $O(N^3)$ time and Algorithm B solves the same problem in $O(N^2)$ time. If algorithm A takes 5 milliseconds to complete for an input size of 10, and algorithm B takes 20 milliseconds for an input size of 10, for what input size do you expect the two algorithms to perform about the same?

6. For an $O(N^{1/3})$ algorithm, an instance with $N = 512$ takes 56 milliseconds. If you used a different-sized data instance and it took 7 milliseconds, how large must that instance be?

**Answers**

1. Algorithm A runs in $O(N^2)$ time, and for an input size of 4, the algorithm runs in 10 milliseconds. How long can you expect it to take to run on an input size of 16?

$$\frac{4^2}{10\text{ ms}} = \frac{16^2}{x} \Rightarrow x = 160\text{ ms}$$

2. Algorithm A runs in $O(\log_2 N)$ time, and for an input size of 16, the algorithm runs in 28 milliseconds. How long can you expect it to take to run on an input size of 64?

$$\frac{\log_2 16}{28\text{ ms}} = \frac{\log_2 64}{x} \Rightarrow x = 42\text{ ms}$$

3. Algorithm A runs in $O(N^3)$ time. For an input size of 10, the algorithm runs in 7 milliseconds. For another input size, the algorithm takes 189 milliseconds. What was that input size?

$$\frac{10^3}{7\text{ ms}} = \frac{N^3}{189\text{ ms}} \Rightarrow N = 30$$

4. For an $O(N^k)$ algorithm, where $k$ is a positive rational number, a friend tells you that an instance of size $M$ took 16 seconds to run. You run an instance of size $4M$ and find that it takes 256 seconds to run. What is the value of $k$?

$$\frac{M^k}{16\text{ s}} = \frac{(4M)^k}{256\text{ s}} \Rightarrow 4^k = \frac{256}{16} \Rightarrow k = 2$$

5. Algorithm A runs in $O(N^3)$ time and Algorithm B solves the same problem in $O(N^2)$ time. If algorithm A takes 5 milliseconds to complete for an input size of 10, and algorithm B takes 20 milliseconds for an input size of 10, for what input size do you expect the two algorithms to perform about the same?

For algorithm A: $O(N^3)$ means execution time $\leq c_1 \cdot N^3$; for $N = 10$ the execution time is $5\text{ ms} = c_1 \cdot 10^3$, so $c_1 = \frac{5}{10^3}$.

For algorithm B: $O(N^2)$ means execution time $\leq c_2 \cdot N^2$; for $N = 10$ the execution time is $20\text{ ms} = c_2 \cdot 10^2$, so $c_2 = \frac{20}{10^2}$.

For what $N$ is $c_1 \cdot N^3 = c_2 \cdot N^2$? Substitute for $c_1$ and $c_2$:

$$\frac{5}{10^3} \cdot N^3 = \frac{20}{10^2} \cdot N^2 \Rightarrow N = 40$$

6. For an \( O(N^3) \) algorithm, an instance with \( N = 512 \) takes 56 milliseconds. If you used a different-sized data instance and it took 7 milliseconds, how large must that instance be?

$$\frac{512^3}{56\text{ ms}} = \frac{N^3}{7\text{ ms}} \implies N = 256$$
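To accompany the MergeSort analysis above, a minimal sketch of the algorithm (array-based, with a temporary buffer; the function names are illustrative, not from the notes):

```cpp
#include <cstdio>
#include <vector>

// Merge the two sorted halves a[lo..mid] and a[mid+1..hi] using the buffer tmp.
// Every element of the range is compared/copied once, so a merge costs O(length).
void merge(std::vector<int>& a, std::vector<int>& tmp, int lo, int mid, int hi) {
    int i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi)  tmp[k++] = a[j++];
    for (k = lo; k <= hi; ++k) a[k] = tmp[k];
}

// Sort a[lo..hi]: log2(N) levels of halving, O(N) merging work per level,
// which gives the O(N log N) total derived in the notes.
void mergeSort(std::vector<int>& a, std::vector<int>& tmp, int lo, int hi) {
    if (lo >= hi) return;
    int mid = lo + (hi - lo) / 2;
    mergeSort(a, tmp, lo, mid);
    mergeSort(a, tmp, mid + 1, hi);
    merge(a, tmp, lo, mid, hi);
}

int main() {
    std::vector<int> a = {23, 78, 45, 8, 32, 56};   // the list from the bubblesort example
    std::vector<int> tmp(a.size());
    mergeSort(a, tmp, 0, static_cast<int>(a.size()) - 1);
    for (int x : a) std::printf("%d ", x);          // prints: 8 23 32 45 56 78
    std::printf("\n");
    return 0;
}
```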
Acceleration of software execution time for operations involving sequences or matrices

Oleksandr Mitsa, Yurii Horoshko, Serhii Vapnichnyi

Abstract

The article describes three methods for lowering the runtime of programs that solve computer science Olympiad problems involving sequences or matrices. The first method relies on representing some sequences as matrices, after which the program for calculating the sequence's members has the same asymptotic behaviour as the fast exponentiation algorithm, i.e. $O(\log(n))$. The second strategy is to improve existing code in order to significantly shorten program runtime. Understanding this approach is crucial for scientists who write code for scientific investigations and deal with matrix multiplication operations. The authors' own challenge is presented and solved using the third strategy, which is based on minimizing time complexity by looking for regularities.

Keywords: programming; olympiad tasks; sequence; matrix; C++.

INTRODUCTION

Sports programming has now become a promising intellectual sport. Every year, the number of pupils and students interested in Olympiads in computer science, as perhaps the most common type of sports programming, is growing. There are many Olympiads and other competitions held by the largest IT companies. HR specialists from these companies have been monitoring the results of various competitions and specific participants for many years. The most promising and successful participants are offered internships, combined with university study and the opportunity to gain full-time employment at the company after training.

Also, many former Olympiad participants go on to organize successful projects, not only ones related to programming and IT. Through their participation in Olympiads, they develop resistance to intense psychological stress. Having spent so much time training, they have learned how to evaluate the likelihood of victory and defeat, and have mastered existing methods (and developed their own) for dealing with stressful situations and with the doubts and anxieties that Olympiad participants experience to varying degrees. Participation in Olympiads, tournaments and other competitions helps students to improve their skills (Zhukovsky, 2015).

At first glance, it seems that to achieve solid results at Olympiads it is enough to study a certain number of existing algorithms and theoretical material, and then simply apply them successfully in competitions, leaving others no chance of winning. But this is not the case. Tasks in competitions are usually formulated in such a way that it is not enough to guess which algorithm to use. Almost always, in order to obtain a complete solution, it is necessary to adapt a known algorithm, to supplement it, to combine several algorithms in one program, and to take steps to reduce the time complexity of the solution (Horoshko, Mitsa, & Melnyk, 2019).

This paper proposes three ways to reduce the runtime for computer science tasks that require the use of sequences and/or arrays:

- performing calculations using a matrix representation of sequences;
- reducing program execution time by using the features of the programming language;
- reducing time complexity by looking for regularities.

The first two techniques need to be learned to show the best results in standard situations. The third approach requires creativity, has no general recipe, and is often used together with the first two.
**PERFORMING CALCULATIONS USING A MATRIX REPRESENTATION OF SEQUENCES**

A matrix representation of the data allows the use of algorithms such as fast exponentiation, which significantly accelerates the program's work of finding the desired element.

One family of sequences that can be written in matrix form is the second-order linear recurrent sequences named after Édouard Lucas. These are pairs of sequences \( \{U_n(P, Q)\} \) and \( \{V_n(P, Q)\} \) whose recurrence relations are written as follows:

\[
\begin{align*}
U_0(P, Q) &= 0, \quad U_1(P, Q) = 1, \\
U_{n+2}(P, Q) &= P \cdot U_{n+1}(P, Q) - Q \cdot U_n(P, Q), \quad n \geq 0, \\
V_0(P, Q) &= 2, \quad V_1(P, Q) = P, \\
V_{n+2}(P, Q) &= P \cdot V_{n+1}(P, Q) - Q \cdot V_n(P, Q), \quad n \geq 0.
\end{align*}
\]
(1)

Particular cases of Lucas sequences are well studied and have their own names. In particular, the sequence \( \{U_n(1, -1)\} \) is better known as the Fibonacci sequence, and the sequence \( \{U_n(2, -1)\} \) as the Pell sequence. The Pell sequence is used to quickly find \( \sqrt{2} \), Pythagorean triples, etc. The ratios of consecutive Pell numbers approach the silver ratio, similarly to how the ratios of consecutive Fibonacci numbers approach the golden ratio.

Another known sequence is \( \{U_n(3, 2)\} \), which is called the Mersenne sequence. The largest known prime numbers are numbers of this sequence. The primality of numbers of this sequence can easily be checked using the Lucas–Lehmer test. They are also used to construct efficient long-period pseudorandom number generators such as the Mersenne Twister.

A slightly less well-known practical application, compared to the sequences discussed above, belongs to the sequence \( \{U_n(1, -2)\} \), which is called the Jacobsthal sequence. Elements of this sequence are easy to find by different schemes. The best known is the recurrence relation:

\[
J_n = \begin{cases} 0, & n = 0; \\ 1, & n = 1; \\ J_{n-1} + 2J_{n-2}, & n > 1. \end{cases}
\]
(2)

One can also use the following recursive relations:

\[ J_{n+1} = 2J_n + (-1)^n; \]
(3)

\[ J_{n+1} = 2^n - J_n. \]
(4)

There is a known relation between the Jacobsthal sequence and Pascal's triangle (Barry, 2003). It consists in a rule for choosing certain numbers in a row of Pascal's triangle whose sum is a Jacobsthal number (Fig. 1).

![Pascal Triangle](image)

Figure 1. Relation of the Jacobsthal sequence to Pascal's triangle

This relation can be written as a formula as follows:

\[ J(n) = \sum_{(n+k) \bmod 3 = 1} C(n,k) = \sum_{(n+k) \bmod 3 = 2} C(n,k) \]
(5)

The Jacobsthal sequence also appears in the problem of the convergence of certain triangle centers on the Euler line of an arbitrary triangle (Barry, 2003). The various relations that arise between Jacobsthal numbers are explored in depth in (Čerin, 2007). Our work below discusses a problem whose efficient solution is based on the use of elements of the Jacobsthal sequence.

Any sequence from the Lucas family of sequences is easily represented in matrix form.
For example, the Fibonacci sequence has a known matrix representation (Knuth, 2011):

\[
\begin{pmatrix} F_{n+1} & F_n \\ F_n & F_{n-1} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^n,
\]
(6)

which can be rewritten as

\[
\begin{pmatrix} F_n \\ F_{n+1} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} F_{n-2} \\ F_{n-1} \end{pmatrix},
\quad \text{or} \quad
\begin{pmatrix} F_{2n} \\ F_{2n+1} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}^n \begin{pmatrix} F_0 \\ F_1 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}^n \begin{pmatrix} 0 \\ 1 \end{pmatrix}.
\]

Other sequences under consideration are specified similarly, and the time complexity of the program for finding members of the sequence equals the time complexity of the fast exponentiation algorithm, i.e. \(O(\log(n))\).

The possibility of finding the value of an element of the Fibonacci sequence directly by the closed-form formula

\[
F_n = \left\lfloor \frac{\varphi^n}{\sqrt{5}} + \frac{1}{2} \right\rfloor, \quad \text{where } \varphi = \frac{1 + \sqrt{5}}{2},
\]

faces the problem of accumulated computational error and is of little use. On the other hand, some other sequences, such as the Jacobsthal sequence, have a convenient closed formula:

\[ J_n = \frac{2^n - (-1)^n}{3}. \]

There is also a known formula for finding elements of the Fibonacci sequence as the determinant of a matrix of size \(n \times n\):

\[
F_{n+1} = \det \begin{pmatrix}
1 & 1 & 0 & \cdots & 0 \\
-1 & 1 & 1 & \cdots & 0 \\
0 & -1 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1
\end{pmatrix}.
\]

If the \(n\)-th element of a sequence equals the sum of the \(k\) previous elements,

\[ A_n = A_{n-1} + A_{n-2} + \cdots + A_{n-k}, \]

then such a sequence is written in the following matrix form:

\[
\begin{pmatrix} A_n \\ A_{n-1} \\ A_{n-2} \\ \vdots \\ A_{n-k+1} \end{pmatrix} =
\begin{pmatrix}
1 & 1 & 1 & \cdots & 1 & 1 \\
1 & 0 & 0 & \cdots & 0 & 0 \\
0 & 1 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 1 & 0
\end{pmatrix}^{\,n-k}
\begin{pmatrix} A_k \\ A_{k-1} \\ A_{k-2} \\ \vdots \\ A_1 \end{pmatrix}.
\]

The matrix used to perform the calculations will be of dimension \(k \times k\). Therefore, performing calculations to find members of sequences using the matrix form of their representation significantly reduces the time complexity of the corresponding algorithms.
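As an illustration of this first method, here is a minimal C++ sketch (ours, not from the paper; overflow and modular reduction are ignored for brevity) that computes \(F_n\) by fast exponentiation of the \(2 \times 2\) matrix from formula (6), using \(O(\log n)\) matrix multiplications:

```cpp
#include <array>
#include <cstdint>
#include <iostream>

using u64 = std::uint64_t;
using Mat = std::array<std::array<u64, 2>, 2>;

// Multiply two 2x2 matrices.
Mat mul(const Mat& a, const Mat& b) {
    Mat c{};                              // zero-initialized result
    for (int i = 0; i < 2; ++i)
        for (int k = 0; k < 2; ++k)
            for (int j = 0; j < 2; ++j)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}

// Raise a 2x2 matrix to the n-th power with O(log n) multiplications
// by repeated squaring, consuming one bit of the exponent per step.
Mat power(Mat m, u64 n) {
    Mat r{{{1, 0}, {0, 1}}};              // identity matrix
    while (n > 0) {
        if (n & 1) r = mul(r, m);
        m = mul(m, m);
        n >>= 1;
    }
    return r;
}

// By formula (6), F(n) is the top-right entry of [[1,1],[1,0]]^n.
u64 fib(u64 n) {
    Mat base{{{1, 1}, {1, 0}}};
    return power(base, n)[0][1];
}

int main() {
    for (u64 n : {2, 10, 50})
        std::cout << "F(" << n << ") = " << fib(n) << '\n';
    // F(2) = 1, F(10) = 55, F(50) = 12586269025 (larger n would overflow 64 bits)
    return 0;
}
```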
**REDUCING PROGRAM EXECUTION TIME BY USING THE FEATURES OF THE PROGRAMMING LANGUAGE**

When solving a problem, it is very important to use the features of the programming language in which the solution is implemented. In particular, let us take the well-known problem of multiplying two matrices. Consider two implementations of this operation. Table 1 shows how the product of matrices \(C = A \cdot B\) is found, all matrices being of dimension \(n \times n\). If, in the well-known variant, the second and third loops are swapped and the element of the first matrix is fixed in an ordinary variable, then the multiplication rate for matrices of dimension 1000 × 1000 increases more than 15 (!) times. As the dimension increases, the advantage of the accelerated version grows further. Further improvement steps are possible (Ermolaev, 2019), but they do not give such a tangible advantage.

Table 1. Two implementations of multiplication of two matrices

<table>
<thead>
<tr><th>Well-known variant</th><th>Accelerated variant</th></tr>
</thead>
<tbody>
<tr>
<td>
<pre>
for (int i = 0; i &lt; n; i++)
  for (int j = 0; j &lt; n; j++)
    for (int k = 0; k &lt; n; k++)
      c[i][j] += a[i][k] * b[k][j];
</pre>
</td>
<td>
<pre>
for (int i = 0; i &lt; n; i++)
  for (int k = 0; k &lt; n; k++) {
    long long x = a[i][k];
    for (int j = 0; j &lt; n; j++)
      c[i][j] += x * b[k][j];
  }
</pre>
</td>
</tr>
</tbody>
</table>

Scientific problems (Stetsyuk, 2014) also require the use of a matrix form of notation for a particular model. Another recommendation is to keep values that are repeatedly needed in the calculation in an ordinary array. In particular, the values of trigonometric functions, if there is enough memory to store them, should be stored in an array rather than recomputed every time. This significantly reduces the running time of the program.

**REDUCING TIME COMPLEXITY BY LOOKING FOR REGULARITIES**

Solving tasks by searching for regularities often leads to the identification of known sequences. Consider the problem proposed by Oleksandr Mitsa at the 15th Open Student International Programming Olympiad named after S. O. Lebedev and V. M. Glushkov, "KPI-OPEN 2018" ([http://kpi-open.org/](http://kpi-open.org/)). The title of this task is "Counter Racing." This task combines, under a completely new perspective, two tasks that are well known to the general public: the Josephus Flavius problem (Graham, Knuth, & Patashnik, 1994) and task No. 2808 from the well-known E-Olymp site (The last number, 2013), which is described as the Choriv counter.

The task

The legendary Shchek and Choriv decided to arrange a competition for their counters. The Shchek counter was created on the basis of the story of Josephus Flavius, where $N$ people stand in a circle and every second person is taken out of the circle. The number of the remaining person is the result of the counter. For example, when there are 5 people in a circle, people will be taken out in the order of their numbers 2, 4, 1, 5, and the result will be number 3.

Choriv's counter was based on a completely different principle. He took the number $N$ and wrote out in a row all the numbers from 1 to $N$. Then he crossed out the numbers in odd positions. Next, he lined the remaining numbers up anew and crossed out those in even positions. These actions were repeated until one number remained, which is the result. For example, for $N=5$, the numbers in odd positions (1, 3, 5) are crossed out first; then, from the remaining numbers 2 and 4, the number in the even position is crossed out, that is, 4. Therefore, the result will be 2.

For full objectivity in determining the winner, it was decided to race the counters for each natural value from 1 to $N$. If, for some value, the result of the Shchek counter is greater than the result of the Choriv counter, Shchek receives one point; if it is less, Choriv receives one point; in the case of a draw the score does not change. Determine the final score of the game for a given number $N$.

**Input format**

Enter the number $N$ ($1 < N < 10^{18}$).

**Output format**

Display the score of the competition.
Table 2. Example to the task

<table>
<thead>
<tr><th>Standard input</th><th>Standard output</th></tr>
</thead>
<tbody>
<tr><td>10</td><td>3 6</td></tr>
<tr><td>100</td><td>48 51</td></tr>
</tbody>
</table>

**Note.** In the first example, the Shchek counter wins only at the values 3, 5 and 7; at the value 1 there is a draw, and in the other cases the Choriv counter wins.

Solution of the problem

We first examine the regularities in the first counter. To do this, we use the scheme proposed in (Graham, Knuth, & Patashnik, 1994) and refine it. First, let us consider how the size of the problem can be halved for an even value of N. Table 3 shows that the size of the problem has been halved, and the formula for the transition from old to new values is

\[ T(N) = 2 \cdot T \left( \frac{N}{2} \right) - 1. \]
(14)

Table 3. Simulation of the Josephus Flavius problem with an even value of N

<table>
<thead>
<tr>
<th></th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th><th>7</th><th>8</th><th>9</th><th>10</th>
</tr>
</thead>
<tbody>
<tr>
<td></td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>4</td><td>3</td><td>2</td><td>2</td><td>1</td>
</tr>
</tbody>
</table>

For an odd value of N we use the same scheme and note that the value 1 in this case will never be a solution (Table 4).

Table 4. Simulation of the Josephus Flavius problem with an odd value of N

<table>
<thead>
<tr>
<th></th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th><th>7</th><th>8</th><th>9</th><th>10</th><th>11</th>
</tr>
</thead>
<tbody>
<tr>
<td></td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>4</td><td>3</td><td>2</td><td>2</td><td>2</td><td>1</td>
</tr>
</tbody>
</table>

Again we see that the size of the problem has been halved, and it is easy to derive the transition formula

\[ T(N) = 2 \cdot T \left( \frac{N}{2} \right) + 1. \]
(15)

To summarize, the complete recalculation scheme is

\[ T(N) = \begin{cases} 1, & \text{if } N = 1; \\ 2 \cdot T \left( \frac{N}{2} \right) - 1, & \text{if } N \text{ is even}; \\ 2 \cdot T \left( \left\lfloor \frac{N}{2} \right\rfloor \right) + 1, & \text{if } N \text{ is odd}. \end{cases} \]
(16)

But even this formula is not enough to solve the problem as a whole. Therefore, we write out the results of both counters for the values N from 1 to 32 (Table 5). From Table 5 one can make the following observation: for the first counter, whenever N is a power of two the answer is 1, and for each following N the answer increases by 2. That is, if \(2^k\) is the largest power of two not exceeding N, then the answer is easily determined by the formula

\[ T(N) = 1 + 2(N - 2^k). \]
(17)
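A minimal sketch (ours, not from the paper) of how formula (17) is applied in code: find the largest power of two not exceeding N and compute \(T(N) = 1 + 2(N - 2^k)\):

```cpp
#include <cstdint>
#include <iostream>

// Result of the Shchek counter (Josephus problem with step 2), using formula (17):
// if 2^k is the largest power of two with 2^k <= n, then T(n) = 1 + 2*(n - 2^k).
std::uint64_t shchek(std::uint64_t n) {
    std::uint64_t p = 1;
    while (p * 2 <= n) p *= 2;        // largest power of two not exceeding n
    return 1 + 2 * (n - p);
}

int main() {
    for (std::uint64_t n = 1; n <= 10; ++n)
        std::cout << "T(" << n << ") = " << shchek(n) << '\n';
    // T(5) = 3 and T(10) = 5, matching the worked example and Table 5.
    return 0;
}
```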
Table 5. Tables for preliminary research

<table>
<thead>
<tr><th>N</th><th>1</th><th>2</th><th>3</th><th>4</th></tr>
</thead>
<tbody>
<tr><td>Counter 1</td><td>1</td><td>1</td><td>3</td><td>1</td></tr>
<tr><td>Counter 2</td><td>1</td><td>2</td><td>2</td><td>2</td></tr>
<tr><td>Winner</td><td>Draw</td><td>Choriv</td><td>Shchek</td><td>Choriv</td></tr>
<tr><td>N</td><td>5</td><td>6</td><td>7</td><td>8</td></tr>
<tr><td>Counter 1</td><td>3</td><td>5</td><td>7</td><td>1</td></tr>
<tr><td>Counter 2</td><td>2</td><td>6</td><td>6</td><td>6</td></tr>
<tr><td>Winner</td><td>Shchek</td><td>Choriv</td><td>Shchek</td><td>Choriv</td></tr>
<tr><td>N</td><td>9</td><td>10</td><td>11</td><td>12</td></tr>
<tr><td>Counter 1</td><td>3</td><td>5</td><td>7</td><td>9</td></tr>
<tr><td>Counter 2</td><td>6</td><td>6</td><td>6</td><td>6</td></tr>
<tr><td>Winner</td><td>Choriv</td><td>Choriv</td><td>Shchek</td><td>Shchek</td></tr>
<tr><td>N</td><td>13</td><td>14</td><td>15</td><td>16</td></tr>
<tr><td>Counter 1</td><td>11</td><td>13</td><td>15</td><td>1</td></tr>
<tr><td>Counter 2</td><td>6</td><td>6</td><td>6</td><td>6</td></tr>
<tr><td>Winner</td><td>Shchek</td><td>Shchek</td><td>Shchek</td><td>Choriv</td></tr>
<tr><td>N</td><td>17</td><td>18</td><td>19</td><td>20</td></tr>
<tr><td>Counter 1</td><td>3</td><td>5</td><td>7</td><td>9</td></tr>
<tr><td>Counter 2</td><td>6</td><td>6</td><td>6</td><td>6</td></tr>
<tr><td>Winner</td><td>Choriv</td><td>Choriv</td><td>Shchek</td><td>Shchek</td></tr>
<tr><td>N</td><td>21</td><td>22</td><td>23</td><td>24</td></tr>
<tr><td>Counter 1</td><td>11</td><td>13</td><td>15</td><td>17</td></tr>
<tr><td>Counter 2</td><td>6</td><td>22</td><td>22</td><td>22</td></tr>
<tr><td>Winner</td><td>Shchek</td><td>Choriv</td><td>Choriv</td><td>Choriv</td></tr>
<tr><td>N</td><td>25</td><td>26</td><td>27</td><td>28</td></tr>
<tr><td>Counter 1</td><td>19</td><td>21</td><td>23</td><td>25</td></tr>
<tr><td>Counter 2</td><td>22</td><td>22</td><td>22</td><td>22</td></tr>
<tr><td>Winner</td><td>Choriv</td><td>Choriv</td><td>Shchek</td><td>Shchek</td></tr>
<tr><td>N</td><td>29</td><td>30</td><td>31</td><td>32</td></tr>
<tr><td>Counter 1</td><td>27</td><td>29</td><td>31</td><td>1</td></tr>
<tr><td>Counter 2</td><td>22</td><td>22</td><td>22</td><td>22</td></tr>
<tr><td>Winner</td><td>Shchek</td><td>Shchek</td><td>Shchek</td><td>Choriv</td></tr>
</tbody>
</table>

Also note that there are only 60 powers of two among the input values from 1 to $10^{18}$.

Let us proceed to the analysis of the second counter. Table 5 shows that the number of distinct answers is very small. Moreover, it can be seen that for N = 1 the answer is 1, for N from 2 to 5 the answer is 2, for N from 6 to 21 it is 6, and from 22 up to the next value to be investigated the answer is 22. Let us simulate this counter and write down the values of the answers that occur. These are the following values: 1, 2, 6, 22, 86, 342, 1366, 5462, 21846, 87382, 349526, 1398102, 5592406, 22369622, ...

With these values in front of us, it is easy to determine the scheme of their calculation:

\[ P(k) = 4 \cdot P(k-1) - 2, \text{ where } P(1) = 1. \]
(18)

It is also clear that on the interval \([P(k), P(k+1)-1]\) the answer is \(P(k)\). Moreover, there are very few such values: in the interval from 1 to \(10^{18}\) there are only 31 of them.
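For small N both counters can also be simulated directly; the following sketch (ours, not part of the paper's solution) reproduces the values in Table 5 and the example score "3 6" for N = 10:

```cpp
#include <iostream>
#include <vector>

// Shchek counter: people 1..n stand in a circle and every second person is removed.
int shchekSim(int n) {
    std::vector<int> v;
    for (int i = 1; i <= n; ++i) v.push_back(i);
    int idx = 0;                              // the person who currently counts "one"
    while (v.size() > 1) {
        idx = (idx + 1) % (int)v.size();      // every second person is removed
        v.erase(v.begin() + idx);             // idx now points at the next survivor
    }
    return v[0];
}

// Choriv counter: write 1..n, cross out odd positions, then even positions, and so on.
int chorivSim(int n) {
    std::vector<int> v;
    for (int i = 1; i <= n; ++i) v.push_back(i);
    bool dropOdd = true;                      // the first pass removes odd positions
    while (v.size() > 1) {
        std::vector<int> next;
        for (int i = 0; i < (int)v.size(); ++i) {
            bool oddPos = (i % 2 == 0);       // positions are 1-based in the statement
            if (oddPos != dropOdd) next.push_back(v[i]);
        }
        v = next;
        dropOdd = !dropOdd;                   // alternate between odd and even positions
    }
    return v[0];
}

int main() {
    int n = 10, shchekScore = 0, chorivScore = 0;
    for (int k = 1; k <= n; ++k) {
        int a = shchekSim(k), b = chorivSim(k);
        if (a > b) ++shchekScore;
        else if (a < b) ++chorivScore;
    }
    std::cout << shchekScore << " " << chorivScore << '\n';   // prints "3 6"
    return 0;
}
```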
Thus, one of the possible solution schemes is the following. On each interval from \(N = 2^k + 1\) to \(N = 2^{k+1}\) we determine how many members of the second sequence it contains, and take this into account, together with the first sequence, when forming the score. Of course, the last interval extends only up to the number \(N\). This scheme leads to the following solution in the C++ programming language:

```cpp
#include <iostream>
using namespace std;

int main() {
    long long N;
    cin >> N;
    long long p = 1, q = 2, Choriv = 0;
    while (2 * p + 1 <= N) {
        p = 2 * p + 1;
        if (p > 4 * q - 2) {
            long long pp = p;
            while (pp > 4 * q - 3) pp = (pp - 1) / 2;
            pp = 2 * (4 * q - 3 - pp) - 1;
            if (pp > q) Choriv += (pp - q) / 2 + 1;
            q = 4 * q - 2;
        }
        Choriv += (p - q) / 2 + 1;
    }
    if (N >= 4 * q - 2) {
        long long pp = p;
        while (pp > 4 * q - 3) pp = (pp - 1) / 2;
        pp = 2 * (4 * q - 3 - pp) - 1;
        if (pp > q) Choriv += (pp - q) / 2 + 1;
        q = 4 * q - 2;
    }
    p = 2 * (N - p) - 1;
    if (p > q) Choriv += (p - q) / 2 + 1;
    cout << Choriv << " " << N - Choriv - 1 << endl;
    return 0;
}
```

But if we continue the investigation, we can obtain a simpler way of solving the problem under consideration. Note that the game is drawn only when \( N = 1 \); for all other values of \( N \) either the first or the second player wins. So let us translate the game results into 0-1 form. Write down the sequence in which the \( i \)-th element is 1 if the second player wins and 0 otherwise, starting with the game for \( N = 2 \):

\[ 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, \ldots \]

Then we write down the lengths of the runs of consecutive equal values:

\[ 1, 1, 1, 1, 1, 3, 5, 3, 5, 5, 11, 21, 11, 11, 21, 21, \ldots \]

The next step is to divide this sequence into blocks of 6 elements and notice that each block consists of two consecutive Jacobsthal numbers

\[ 1, 1, 3, 5, 11, 21, 43, 86, 171, \ldots \]

That is, the new scheme for solving this problem is as follows. Each time we take two consecutive values of the Jacobsthal sequence and use them to update the running score. Of course, when forming the final score we use only those values of the last block of six that fit within the number of games \( N \).

For example, let us calculate the score for \( N = 40 \). Note that this requires the first two sixes and part of the first value of the third six. The score after 31 games is described by summing the elements of the first two sixes. To count the number of games won by the first player, we sum the items in even positions, \( 1 + 1 + 1 + 5 + 3 + 5 = 16 \), and to count the number of games won by the second player, we sum the items in odd positions, \( 1 + 1 + 1 + 3 + 3 + 5 = 14 \). Then we take into account that the third six starts with 11 victories of the second player, of which we need to count 9. That is, the final score will be 16:23 in favor of the second player.
The described solution in the C++ programming language is quite simple and compact:

```cpp
#include <iostream>
using namespace std;

long long J[61], N, Choriv, Shchek, rem;

void Score(long long &X, long long Y) {
    if (rem > Y) {
        X += Y;
        rem -= Y;
    } else {
        X += rem;
        rem = 0;
    }
}

int main() {
    cin >> N;
    J[1] = J[2] = 1;                      // the first two Jacobsthal numbers
    int i = 1;
    while (Shchek + Choriv + 3 * (J[i] + J[i + 1]) < N) {
        Shchek += 2 * J[i] + J[i + 1];
        Choriv += J[i] + 2 * J[i + 1];
        i += 2;
        J[i] = J[i - 1] + 2 * J[i - 2];
        J[i + 1] = J[i] + 2 * J[i - 1];
    }
    rem = N - 1 - Shchek - Choriv;
    Score(Shchek, J[i]);
    Score(Choriv, J[i + 1]);
    Score(Shchek, J[i]);
    Score(Choriv, J[i]);
    Score(Shchek, J[i + 1]);
    Score(Choriv, J[i + 1]);
    cout << Choriv << " " << Shchek;
    return 0;
}
```

It should be noted that in this task the number of Jacobsthal numbers involved does not exceed 60. The Score procedure handles the ending of the computation when the last block of six is not fully used.

Thus, the considered problem (of increased complexity), used in an international competition, is a combination of two well-known tasks under a completely new perspective that has not been used before. Two variants of its solution have been presented. For the more flexible and efficient one, it was enough to find the regularity given by the elements of the Jacobsthal sequence.

**CONCLUSIONS**

This paper discusses three approaches to reducing the execution time of solutions to computer science tasks that require some knowledge of sequences and/or arrays.

The first approach is to write the sequence in matrix form and then use fast matrix exponentiation. This makes it possible to find a particular element of a sequence quickly.

The second approach essentially improves program code that is usually considered standard, and can speed up the program significantly (more than 15 times for 1000 × 1000 matrices) using a fairly simple modification. This approach was tested for C++, the most popular sports programming language. It is effective in solving sports programming tasks, because in this area fast matrix multiplication methods are rarely used due to the excessive size of their code. It is also very important for scientists who write code for scientific research and deal with matrix multiplication operations.

To demonstrate the third approach, we present a rather complex original task and show that its solution can be based on finding members of the well-known Jacobsthal sequence.

The approaches presented in the paper can be used both individually and in combination. The work will be of interest to pupils, students and teachers interested in programming, especially sports programming, and to scientists who write code for scientific research.

REFERENCES

About the authors:

Oleksandr Mitsa, Uzhhorod National University, Uzhhorod, Ukraine. ORCID: [https://orcid.org/0000-0002-6958-0870](https://orcid.org/0000-0002-6958-0870), alex.mitsa@gmail.com

Yurii Horoshko, T. H. Shevchenko National University "Chernihiv Colehium", Chernihiv, Ukraine. ORCID: [https://orcid.org/0000-0001-9290-7563](https://orcid.org/0000-0001-9290-7563), horoshko_y@ukr.net

Serhii Vapnichny, Uzhhorod National University, Uzhhorod, Ukraine. ORCID: [https://orcid.org/0000-0001-8131-0884](https://orcid.org/0000-0001-8131-0884), svapnichny@gmail.com
How to Execute a Conditional?*

Richard B. Scherl
Department of Computer and Information Science
New Jersey Institute of Technology
Newark, New Jersey 07102
scherl@vienna.njit.edu

Yves Lespérance
Department of Computer Science
Glendon College, York University
2275 Bayview Ave.
Toronto, ON, Canada M4N 3M6
lesperan@cs.toronto.edu

Abstract

The execution of a plan containing conditionals by an agent with incomplete knowledge poses some difficult problems. In order for the conditional to be meaningful, the agent must know whether or not the condition is true at execution time. This paper proposes one solution to this problem by integrating sensing actions into GOLOG, a high-level robot programming language. At run time, the interpreter performs a small amount of planning to ensure that the agent will know whether or not a condition is true prior to the point where the test for the truth of the condition needs to be made.

Introduction

Artificial agents, be they robots or software agents, need to be designed to achieve their goals in a world about which the agents have incomplete knowledge. Our approach to the design of such agents is to develop a high-level language, called GOLOG (Levesque et al. 1996; Lespérance et al. 1995; 1994), in which to specify them. Programs written in GOLOG can be seen as schematic plans with the details automatically filled in at execution time. GOLOG programs are composed in a way similar to conventional high-level computer programs; however, the language has a semantics grounded in the situation calculus (Reiter 1991). The output of the GOLOG interpreter is a sequence of primitive actions expressed in the situation calculus.

Unlike conventional computer programs, GOLOG programs frequently need to work under incomplete knowledge. Consider the conditional "if C then α". Clearly, to execute the command, the agent needs to know whether C is true. This presents no conceptual problem if the agent has complete knowledge, which is the assumption made by classical planners and compilers for traditional computer programs. Relaxing this assumption exposes many difficult problems, some of which have been discussed by Etzioni et al. (1992). For instance, should the agent ask its sensors first or should it check its knowledge base first? In addition to the sensory and mental actions, what other actions is the agent allowed to do?

Levesque (1996) discusses the limitations of the classical definition of planning and generalizes the definition to cover cases where the agent has incomplete knowledge of the initial situation and can execute sensing actions. In this paper we adopt a version of the situation calculus with a representation of knowledge and knowledge-producing actions (Scherl and Levesque 1993) as the semantic foundation for GOLOG.

Given the conditional "if C then α" to execute in state s, the agent strives to achieve a state in which it knows whether C is true in s. Taken as a planning problem, this differs from classical planning in the following ways¹:

1. The goal is epistemic. (To have the knowledge of whether something is true.)

2. It involves more than one state. (To achieve a state where the truth value of something in an earlier state comes to light.)

Now the output of the GOLOG interpreter is a sequence of basic situation calculus actions that may include sensing actions. There may be many ways to achieve the goal of knowing whether C is true.
Achieving the goal of knowing the truth value of a condition is an additional element of the high-level schematic nature of GOLOG plans that are filled in with the details at execution time, e.g., the particular sequence of actions (including sensing actions) needed to ensure that the agent will know whether or not the condition is true at the time the test needs to be made.

\(^1\)It is interesting to note here that because of the second feature, classical planning formalisms such as STRIPS are no longer expressive enough, and we have to take seriously formalisms that represent states explicitly.

The following problem (based on an example due originally to Savage and then modified by Poole) will be used to illustrate the approach taken here\(^2\): The problem is to make a 3-egg omelette from a set of eggs, some of which may be bad. None of the eggs in the omelette should be bad. We have two bowls; we can only see if an egg is bad if it is in a bowl. We can throw out the whole bowl. We can assume a limited number of eggs (say 5), and add the statement that there are at least 3 good eggs. Furthermore, the agent has two methods of determining whether or not an egg is bad—visual and olfactory. The following are the actions available to the agent:
- Break an egg into a bowl.
- Pour the contents of one bowl into another.
- Throw out the contents of one bowl.
- Visually inspect a bowl to see if there are any bad eggs in it.
- Sniff a bowl to see if there are any bad eggs in it.

The goal is to: Have three eggs in a bowl that are not bad. In the next two sections, the situation calculus background and then the GOLOG programming language are discussed. The addition of knowledge-producing actions to the situation calculus and the needed revisions to the GOLOG interpreter are covered in the following two sections.

The Situation Calculus: A Language for Specifying Dynamics

The situation calculus (following the presentation in (Reiter 1991)) is a first-order language for representing dynamically changing worlds in which all of the changes are the result of named actions performed by some agent. For example:

\begin{verbatim}
BREAK_INTO(bowl), FETCH(container), POUR(bowl1, bowl2), THROW_OUT(bowl)
\end{verbatim}

Terms are used to represent states of the world—i.e., situations. If $\alpha$ is an action and $s$ a situation, the result of performing $\alpha$ in $s$ is represented by $do(\alpha, s)$. The constant $S_0$ is used to denote the initial situation. Relations whose truth values vary from situation to situation, called fluents, are denoted by predicate symbols taking a situation term as the last argument. For example, BROKEN(x, s) means that object $x$ is broken in situation $s$. Functions whose denotations vary from situation to situation are called functional fluents. They are denoted by function symbols with an extra argument taking a situation term, as in NUMBER_EGGS(bowl, s), i.e., the number of eggs in bowl in $s$. In the omelette example, the following fluents are needed:

\begin{verbatim}
IN(egg, bowl, s), BAD(egg, s), BROKEN(egg, s), HOLDING(egg, s), NUMBER_EGGS(bowl, s)
\end{verbatim}

The following non-fluents are needed:

\begin{verbatim}
EGG(x), SMALL_BOWL, LARGE_BOWL, BASKET
\end{verbatim}

It is assumed that the axiomatizer has provided for each action $\alpha(\vec{x})$ an action precondition axiom of the form given in (1), where $\pi_\alpha(\vec{x}, s)$ is a formula specifying the preconditions for action $\alpha(\vec{x})$.
\textbf{Action Precondition Axiom}
\begin{equation}
\text{POSS}(\alpha(\vec{x}), s) \equiv \pi_\alpha(\vec{x}, s) \tag{1}
\end{equation}

An action precondition axiom for the action BREAK_INTO is given below.
\begin{equation}
\text{POSS}(\text{BREAK\_INTO}(bowl), s) \equiv \exists\, egg\, (\neg\text{BROKEN}(egg, s) \land \text{HOLDING}(egg, s)) \tag{2}
\end{equation}

The predicate POSS allows us to define situations reachable by an executable sequence of actions. Intuitively, $s \leq s'$ holds if and only if there is a sequence of zero or more executable actions which lead from situation $s$ to $s'$. An action is executable if the action's preconditions are true in the situation in which the action is to be performed. We need the following axioms\textsuperscript{3}:
\begin{equation}
\neg (s < S_0) \tag{3}
\end{equation}
\begin{equation}
s < do(a, s') \equiv (\text{POSS}(a, s') \land s \leq s') \tag{4}
\end{equation}
where $s \leq s'$ is shorthand for $s < s' \lor s = s'$.

\textsuperscript{3}The full set of foundational axioms for the situation calculus can be found in (Lin and Reiter 1994). These are extended to cover the situation calculus with knowledge in (Scherl 1996a).

Furthermore, it is assumed that the axiomatizer has provided for each fluent $F$ two general effect axioms of the form given in (5) and (6).

\textbf{General Positive Effect Axiom for Fluent F}
\begin{equation}
\text{POSS}(a, s) \land \gamma^{+}_{F}(\vec{x}, a, s) \rightarrow F(\vec{x}, do(a, s)) \tag{5}
\end{equation}

\textbf{General Negative Effect Axiom for Fluent F}
\begin{equation}
\text{POSS}(a, s) \land \gamma^{-}_{F}(\vec{x}, a, s) \rightarrow \neg F(\vec{x}, do(a, s)) \tag{6}
\end{equation}

Here $\gamma^{+}_{F}(\vec{x}, a, s)$ is a formula describing under what conditions doing the action $a$ in situation $s$ leads the fluent $F$ to become true in the successor situation $do(a, s)$, and similarly $\gamma^{-}_{F}(\vec{x}, a, s)$ is a formula describing the conditions under which performing action $a$ in situation $s$ results in the fluent $F$ becoming false in situation $do(a, s)$. Effect axioms provide the "causal laws" for the domain of application. Reiter (1991) shows how to derive a set of successor state axioms of the form given in (7) from the effect axioms (positive and negative) and a completeness assumption.

\textbf{Successor State Axiom}
\begin{equation}
\text{POSS}(a, s) \rightarrow [F(\vec{x}, do(a, s)) \equiv \gamma^{+}_{F}(\vec{x}, a, s) \lor (F(\vec{x}, s) \land \neg \gamma^{-}_{F}(\vec{x}, a, s))] \tag{7}
\end{equation}

Similar successor state axioms may be written for functional fluents. A successor state axiom is needed for each fluent $F$, and an action precondition axiom is needed for each action $a$. The following are successor state axioms for the fluents BROKEN and IN:
\begin{equation}
\text{POSS}(a, s) \rightarrow [\text{BROKEN}(e, do(a, s)) \equiv (\exists b\, (a = \text{BREAK\_INTO}(b)) \land \text{HOLDING}(e, s)) \lor \text{BROKEN}(e, s)] \tag{8}
\end{equation}
\begin{equation}
\text{POSS}(a, s) \rightarrow [\text{IN}(e, b_1, do(a, s)) \equiv (a = \text{BREAK\_INTO}(b_1) \land \text{HOLDING}(e, s)) \lor (\exists b_2\, (a = \text{POUR}(b_2, b_1)) \land \text{IN}(e, b_2, s)) \lor (\text{IN}(e, b_1, s) \land a \neq \text{THROW\_OUT}(b_1) \land \neg\exists b_2\, (a = \text{POUR}(b_1, b_2)))] \tag{9}
\end{equation}

The axioms specify completely all possible ways that the truth value of the fluents can change in moving from situation $s$ to situation $do(a, s)$.
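To make the preceding machinery concrete, the following sketch (ours, not the authors'; Python is used only for illustration, and the state representation, names and the treatment of HOLDING are simplifying assumptions) evaluates the precondition axiom (2) and applies the successor state axioms (8) and (9) action by action.

```python
# A minimal sketch (not from the paper) of the omelette domain: the fluents of a
# situation are stored explicitly, and do(a, s) computes the fluents of the
# successor situation according to the successor state axioms (8) and (9).

def initial_state():
    return {
        "holding": {"egg1"},                               # eggs the agent holds
        "broken": set(),                                   # eggs already broken
        "in": {"SMALL_BOWL": set(), "LARGE_BOWL": set()},  # bowl -> eggs it contains
    }

def poss(action, s):
    """Precondition axiom (2): BREAK_INTO requires holding an unbroken egg."""
    if action[0] == "BREAK_INTO":
        return any(e not in s["broken"] for e in s["holding"])
    return True  # preconditions of the other actions are omitted in this sketch

def do(action, s):
    """Compute the fluents of do(a, s) from those of s (axioms (8) and (9))."""
    op, args = action[0], action[1:]
    nxt = {"holding": set(s["holding"]),
           "broken": set(s["broken"]),
           "in": {b: set(es) for b, es in s["in"].items()}}
    if op == "BREAK_INTO":            # held eggs become broken and end up in the bowl
        bowl = args[0]
        for e in list(nxt["holding"]):
            nxt["broken"].add(e)
            nxt["in"][bowl].add(e)
        nxt["holding"].clear()
    elif op == "POUR":                # eggs move from the source bowl to the target
        src, dst = args
        nxt["in"][dst] |= nxt["in"][src]
        nxt["in"][src] = set()
    elif op == "THROW_OUT":           # eggs disappear from the bowl
        nxt["in"][args[0]] = set()
    return nxt

s0 = initial_state()
a = ("BREAK_INTO", "SMALL_BOWL")
if poss(a, s0):
    s1 = do(a, s0)
    print(s1["in"]["SMALL_BOWL"])     # {'egg1'}
```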
GOLOG: Adding complex actions to the situation calculus

Actions in the situation calculus are primitive and deterministic. They are like primitive computer instructions (e.g., assignment). We need complex actions for the same reason that we need programs. This set of complex action expressions forms a programming language that we call GOLOG (alGOl in LOGic). Complex actions could be treated as first-class entities, but since the tests that appear in forms like if $\phi$ then $\delta_1$ else $\delta_2$ involve formulas $\phi$, this means that we must reify fluents and formulas. Moreover, it is necessary to axiomatize the correspondence between these reified formulas and the actual situation calculus formulas. This results in a much more complex theory. Instead we treat complex action expressions as abbreviations for expressions in the situation calculus logical language. They may be thought of as macros that expand into the genuine logical expressions. A particular execution sequence of a complex action expression will be a sequence of situation calculus primitive actions. In this way the solution to the frame problem (for primitive actions) is extended to complex actions as well, since the complex actions are eliminated by macro expansion. This is done by defining a predicate Do, as in $Do(\delta, s, s')$, where $\delta$ is a complex action expression. $Do(\delta, s, s')$ is intended to mean that the agent's doing action $\delta$ in situation $s$ leads to a (not necessarily unique) situation $s'$. The inductive definition of Do includes the following cases:

- $Do(a, s, s') \equiv \text{POSS}(a, s) \land s' = do(a, s)$ — simple actions
- $Do(\phi?, s, s') \equiv \phi[s] \land s = s'$ — tests
- $Do([\delta_1; \delta_2], s, s') \equiv \exists s''\,(Do(\delta_1, s, s'') \land Do(\delta_2, s'', s'))$ — sequences
- $Do([\delta_1 \mid \delta_2], s, s') \equiv Do(\delta_1, s, s') \lor Do(\delta_2, s, s')$ — nondeterministic choice of actions
- $Do((\pi x)\,\delta, s, s') \equiv \exists x\, Do(\delta, s, s')$ — nondeterministic choice of parameters
- $Do(\textbf{if } \phi \textbf{ then } \delta_1 \textbf{ else } \delta_2,\ s, s') \equiv (\phi[s] \land Do(\delta_1, s, s')) \lor (\neg\phi[s] \land Do(\delta_2, s, s'))$ — conditionals
- $Do(\delta^{*}, s, s') \equiv \forall P\,[\forall s_1\, P(s_1, s_1) \land \forall s_1, s_2, s_3\, (P(s_1, s_2) \land Do(\delta, s_2, s_3) \supset P(s_1, s_3))] \supset P(s, s')$ — nondeterministic iteration
- $Do(\textbf{while } \phi \textbf{ do } \delta,\ s, s') \equiv \forall P\,[\forall s_1\, (\neg\phi[s_1] \supset P(s_1, s_1)) \land \forall s_1, s_2, s_3\, (\phi[s_1] \land Do(\delta, s_1, s_2) \land P(s_2, s_3) \supset P(s_1, s_3))] \supset P(s, s')$ — while loops

Additionally, the notation $\phi[s]$ means that a situation argument is added to all fluents in $\phi$, if one is missing. The definition of while loops could be simplified by utilizing the definition of nondeterministic iteration.
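As an illustration of the operational reading of Do, the following sketch (ours; Python used for readability, whereas an actual GOLOG interpreter would typically be written in Prolog and perform the expansion logically) enumerates the situations $s'$ reachable by executing a complex action. The program encoding, and the helpers `poss`, `step` and `holds`, are illustrative assumptions, not part of GOLOG.

```python
# Sketch (ours) of the Do relation as a generator over reachable situations.
# Programs are nested tuples; "situations" are whatever values the caller uses.

def Do(prog, s, poss, step, holds):
    """Yield every situation s' such that Do(prog, s, s') holds."""
    kind = prog[0]
    if kind == "act":                        # simple action
        a = prog[1]
        if poss(a, s):
            yield step(a, s)
    elif kind == "test":                     # phi?
        if holds(prog[1], s):
            yield s
    elif kind == "seq":                      # delta1 ; delta2
        for s1 in Do(prog[1], s, poss, step, holds):
            yield from Do(prog[2], s1, poss, step, holds)
    elif kind == "choice":                   # delta1 | delta2
        yield from Do(prog[1], s, poss, step, holds)
        yield from Do(prog[2], s, poss, step, holds)
    elif kind == "if":                       # if phi then delta1 else delta2
        branch = prog[2] if holds(prog[1], s) else prog[3]
        yield from Do(branch, s, poss, step, holds)
    elif kind == "while":                    # while phi do delta
        if holds(prog[1], s):
            for s1 in Do(prog[2], s, poss, step, holds):
                yield from Do(prog, s1, poss, step, holds)
        else:
            yield s

# Tiny example: the "situation" is the number of eggs in the large bowl.
poss = lambda a, n: True
step = lambda a, n: n + 1 if a == "add_egg" else n
holds = lambda phi, n: phi(n)

program = ("while", lambda n: n < 3, ("act", "add_egg"))
print(list(Do(program, 0, poss, step, holds)))    # [3]
```

The omelette program given next would be expressed in the same nested-tuple style; the point of the sketch is only that macro expansion of a complex action bottoms out in sequences of executable primitive actions.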
Additionally, there is the following abbreviation:
\[
\text{ACHIEVE}(\phi) \stackrel{\text{def}}{=} (A_1 \mid A_2 \mid \ldots \mid A_n)^{*};\ \phi?
\]
A possible GOLOG program for the omelette example\(^5\) is as follows:

\textbf{while} $\neg(\text{NUMBER\_EGGS}(\text{LARGE\_BOWL}) = 3)$ \textbf{do}
  ACHIEVE(HOLDING(e));
  BREAK_INTO(SMALL_BOWL);
  \textbf{if} BAD(SMALL_BOWL) \textbf{then} THROW_OUT(SMALL_BOWL)
  \textbf{else} POUR(SMALL_BOWL, LARGE_BOWL)

The problem that is being addressed in this paper is how to ensure that the agent knows the truth value of BAD(SMALL_BOWL) each time that the condition needs to be evaluated.

Adding Knowledge and Perceptual Actions

To model the effects of perceptual actions, we must come up with a suitable formalization of knowledge. The approach we take is to adapt the standard possible-world model of knowledge to the situation calculus, as first done by Moore (1980). Informally, we think of there being a binary accessibility relation over situations, where a situation $s'$ is understood as being accessible from a situation $s$ if, as far as the agent knows in situation $s$, he might be in situation $s'$. So something is known in $s$ if it is true in every $s'$ accessible from $s$, and conversely something is not known if it is false in some accessible situation. To treat knowledge as a fluent, we introduce a binary relation $K(s', s)$, read as "$s'$ is accessible from $s$", and treat it the same way we would any other fluent. In other words, from the point of view of the situation calculus, the last argument to $K$ is the official situation argument (expressing what is known in situation $s$), and the first argument is just an auxiliary like the $y$ in BROKEN(y, s)\(^6\). We can now introduce the notation $\mathbf{Knows}(P, s)$ (read as $P$ is known in situation $s$) as an abbreviation for a formula that uses $K$. For example,
\[
\mathbf{Knows}(\text{BROKEN}(y), s) \stackrel{\text{def}}{=} \forall s'\, (K(s', s) \rightarrow \text{BROKEN}(y, s')).
\]
Note that this notation supplies the appropriate situation argument to the fluent on expansion (and other conventions are certainly possible). For the case of equality literals the convention is to supply the situation argument to each non-variable argument of the equality predicate. For example:
\[
\mathbf{Knows}(\text{NUMBER}(\text{BILL}) = \text{NUMBER}(\text{MARY}), s) \stackrel{\text{def}}{=} \forall s'\, (K(s', s) \rightarrow \text{NUMBER}(\text{BILL}, s') = \text{NUMBER}(\text{MARY}, s')).
\]
This notation can be generalized inductively to arbitrary formulas. Turning now to knowledge-producing actions, there are two sorts of actions to consider: actions whose effect is to make known the truth value of some formula, and actions to make known the value of some term. A discussion of the second case may be found in (Scherl and Levesque 1993). In the first case, we might imagine a $\text{SENSE}_P$ action for a fluent $P$, such that after doing a $\text{SENSE}_P$, the truth value of $P$ is known. We introduce the notation $\mathbf{Kwhether}(P, s)$ as an abbreviation for a formula indicating that the truth value of a fluent $P$ is known:
\[
\mathbf{Kwhether}(P, s) \stackrel{\text{def}}{=} \mathbf{Knows}(P, s) \lor \mathbf{Knows}(\neg P, s).
\]
It will follow from our specification that $\mathbf{Kwhether}(P, do(\text{SENSE}_P, s))$. The specifications of both INSPECT and SNIFF are similar to $\text{SENSE}_P$.
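The possible-world treatment of Knows and Kwhether can be illustrated with a small sketch (ours, not the paper's): the epistemic state is simply the set of situations the agent considers possible, and a SENSE-style action filters that set. Python, the world encoding and the function names are illustrative assumptions.

```python
# Sketch (ours): the agent's epistemic state is the set of K-accessible worlds.
# Knows(P) holds iff P is true in every accessible world; Kwhether(P) iff P is
# known to be true or known to be false.

def knows(p, worlds):
    return all(p(w) for w in worlds)

def kwhether(p, worlds):
    return knows(p, worlds) or knows(lambda w: not p(w), worlds)

def sense(p, worlds, actual):
    """A SENSE_P-style action: keep only worlds agreeing with the actual world on P."""
    return {w for w in worlds if p(w) == p(actual)}

# One egg in the small bowl; the only unknown is whether it is bad.
actual = frozenset({"bad"})                       # in fact, the egg is bad
worlds = {frozenset(), frozenset({"bad"})}        # initially the agent cannot tell

bad = lambda w: "bad" in w
print(kwhether(bad, worlds))                      # False: truth value unknown
worlds = sense(bad, worlds, actual)               # INSPECT or SNIFF the bowl
print(kwhether(bad, worlds), knows(bad, worlds))  # True True
```

This filtering is exactly the effect that the successor state axiom for $K$, given next, prescribes for knowledge-producing actions, while ordinary actions merely advance every accessible world in lockstep.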
The approach being developed here rests on the specification of a successor state axiom for the $K$ relation. For all situations $do(a, s)$, the $K$ relation will be completely determined by the $K$ relation at $s$ and the action $a$. For non-knowledge-producing actions (e.g., BREAK_INTO(p)), the specification (based on Moore (1980; 1985)) is as follows:
\begin{equation}
\text{POSS}(\text{BREAK\_INTO}(p), s) \rightarrow [K(s'', do(\text{BREAK\_INTO}(p), s)) \equiv \exists s'\, (K(s', s) \land s'' = do(\text{BREAK\_INTO}(p), s'))] \tag{10}
\end{equation}
The idea here is that as far as the agent at world $s$ knows, he could be in any of the worlds $s'$ such that $K(s', s)$. At $do(\text{BREAK\_INTO}(p), s)$, as far as the agent knows, he can be in any of the worlds $do(\text{BREAK\_INTO}(p), s')$ for any $s'$ such that $K(s', s)$. So the only change in knowledge that occurs in moving from $s$ to $do(\text{BREAK\_INTO}(p), s)$ is the knowledge that the action BREAK_INTO has been performed.

\(^5\)Here ACHIEVE(HOLDING(e)) will be instantiated by the sequence of actions that enable the agent to pick up some available egg. These low-level actions have not been defined in this paper.

\(^6\)Note that using this convention means that the arguments to $K$ are reversed from their normal modal logic use.

Now consider the simple case of the knowledge-producing action INSPECT that determines whether or not the fluent BAD is true (following Moore (1980; 1985)).
\begin{equation}
\text{POSS}(\text{INSPECT}(b), s) \rightarrow [K(s'', do(\text{INSPECT}(b), s)) \equiv \exists s'\, (K(s', s) \land s'' = do(\text{INSPECT}(b), s') \land (\text{BAD}(b, s) \equiv \text{BAD}(b, s')))] \tag{11}
\end{equation}
Again, as far as the agent at world $s$ knows, he could be in any of the worlds $s'$ such that $K(s', s)$. At $do(\text{INSPECT}(b), s)$, as far as the agent knows, he can be in any of the worlds $do(\text{INSPECT}(b), s')$ for all $s'$ such that $K(s', s)$ and $\text{BAD}(b, s) \equiv \text{BAD}(b, s')$. The idea here is that in moving from $s$ to $do(\text{INSPECT}(b), s)$, the agent not only knows that the action INSPECT(b) has been performed (as above), but also the truth value of the predicate BAD. Observe that the successor state axiom for BAD guarantees that BAD is true at $do(\text{INSPECT}(b), s)$ iff BAD is true at $s$, and similarly for $s'$ and $do(\text{INSPECT}(b), s')$. Therefore, BAD has the same truth value in all worlds $s''$ such that $K(s'', do(\text{INSPECT}(b), s))$, and so $\mathbf{Kwhether}(\text{BAD}, do(\text{INSPECT}(b), s))$ is true. The axiomatization for SNIFF(b) is exactly the same. The two actions would likely differ in their possibility conditions. For example, the axiomatization of POSS(INSPECT(b)) may require that the lighting be adequate, while the axiomatization of POSS(SNIFF(b)) may require that the agent not have a cold. In the omelette problem, there are two knowledge-producing actions. In general, there may be many. Associated with each knowledge-producing action $\alpha_i$ is a formula $\varphi_i(s, s')$.
The form of the successor state axiom for $K$ is then as follows:

\textbf{Successor State Axiom for $K$}
\begin{equation}
\forall s, s''.\ K(s'', do(a, s)) \equiv \exists s'\, (K(s', s) \land s'' = do(a, s') \land ((a = \alpha_1) \rightarrow \varphi_1(s, s')) \land \ldots \land ((a = \alpha_n) \rightarrow \varphi_n(s, s'))) \tag{12}
\end{equation}

The relation $K$ at a particular situation $do(a, s)$ is completely determined by the relation at $s$ and the action $a$. In (Scherl and Levesque 1993) it is argued that this formulation provides a solution to the frame problem for the situation calculus with knowledge and knowledge-producing actions.

\textbf{Achieving epistemic goals}

Given the conditional\(^7\)

\textbf{if} BAD(SMALL_BOWL) \textbf{then} THROW_OUT(SMALL_BOWL) \textbf{else} POUR(SMALL_BOWL, LARGE_BOWL)

to execute in the state $s$, the agent strives to achieve a state $s^*$ such that $\mathbf{Kwhether}(C(s), s^*)$ and $s \leq s^*$ hold. This can be ensured by having the interpreter insert an ACHIEVE($\mathbf{Kwhether}(\text{BAD})$) complex action before the test. This amounts to performing planning to achieve the epistemic goal\(^8\). Given a background theory $\mathcal{D}$\(^9\), a sequence of actions\(^{10}\) $\alpha_1, \ldots, \alpha_n$ is a plan for the goal $\mathbf{Kwhether}(\varphi, s)$ iff the plan is executable,
\[
\mathcal{D} \models \text{POSS}([\alpha_1, \ldots, \alpha_n], s),
\]
and after the plan is executed, the agent knows the truth value of $\varphi$:
\[
\mathcal{D} \models \mathbf{Kwhether}(\varphi[s], do([\alpha_1, \ldots, \alpha_n], s)).
\]
In the omelette problem, the GOLOG interpreter would insert a single sense action (either INSPECT(b) or SNIFF(b), depending on the physical conditions of both the location and the agent) prior to the test for BAD(b). The result of executing the interpreter on $Do(\delta, s, s')$, where $\delta$ is the omelette plan given earlier\(^{11}\), is a binding for $s'$ — the name of a situation resulting from a successful execution of a sequence of primitive actions that instantiate $\delta$. This sequence of primitive actions will have the sense actions spliced in at the appropriate points, so that the agent will always know the truth value of BAD(b) at the point where it would need to test the condition.

\(^7\)Actually the test for $\neg(\text{NUMBER\_EGGS}(\text{LARGE\_BOWL}) = 3)$ poses a similar problem, but in this particular example it can be shown that no sensing is necessary, i.e., deduction suffices to determine the truth value of the fluent.

\(^8\)We remark that here we are only allowing the agent to construct sequential plans to instantiate the ACHIEVE action. In general one may want to consider producing plans with conditionals and loops. For example, one way to find out whether ON_TABLE(a) holds is to move one step; scan the surroundings; if either the table or the block $a$ is seen, then sense whether ON_TABLE(a), else continue to move one step, and so on. See (Levesque 1996) for a discussion of the general case from a different perspective.

\(^9\)The theory must include unique names axioms for actions, successor state axioms, axioms about Knows, and axioms about the initial state.

\(^{10}\)In this section, the notation $[\alpha_1, \ldots, \alpha_n]$ is used without formal definition wherever a single action term may occur, to represent the sequential application of each action term in the list, beginning with $\alpha_1$ and ending with $\alpha_n$.

\(^{11}\)It may be argued that we should require the agent to have knowledge of this fact: $\mathcal{D} \models \mathbf{Knows}(\text{POSS}([\alpha_1, \ldots, \alpha_n], s), s^*)$.
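A sketch (ours, not the authors' implementation) of the two conditions just stated — executability and knowledge of $\varphi$ afterwards — using a set-of-worlds representation of the agent's epistemic state. The tiny domain, the treatment of INSPECT/SNIFF as physically inert, and all names are illustrative assumptions.

```python
# Sketch (ours): check whether a candidate action sequence is a plan for
# Kwhether(phi), i.e. it is executable and phi's truth value is known afterwards.
from collections import namedtuple

World = namedtuple("World", "bad lighting_ok agent_has_cold")

def possible(action, world):
    if action == "INSPECT":
        return world.lighting_ok
    if action == "SNIFF":
        return not world.agent_has_cold
    return True

def progress(action, worlds, actual):
    """K-update in the spirit of axiom (12): a knowledge-producing action filters
    the accessible worlds on the sensed fluent; it has no physical effect here."""
    if action in ("INSPECT", "SNIFF"):
        worlds = {w for w in worlds if w.bad == actual.bad}
    return worlds, actual

def is_plan_for_kwhether(actions, worlds, actual, phi):
    for a in actions:
        if not possible(a, actual):                  # executability condition
            return False
        worlds, actual = progress(a, worlds, actual)
    return len({phi(w) for w in worlds}) == 1        # phi known true or known false

actual = World(bad=True, lighting_ok=True, agent_has_cold=True)
worlds = {actual, World(bad=False, lighting_ok=True, agent_has_cold=True)}
print(is_plan_for_kwhether([], worlds, actual, lambda w: w.bad))           # False
print(is_plan_for_kwhether(["INSPECT"], worlds, actual, lambda w: w.bad))  # True
print(is_plan_for_kwhether(["SNIFF"], worlds, actual, lambda w: w.bad))    # False (agent has a cold)
```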
In general\(^{12}\), the issue of what sorts of actions the agent may perform arises. Note that in order to satisfy the preconditions of the perceptual acts the agent may need to alter the state of the world. For some applications it may be necessary to ensure that the plan $A$ leaves the truth value of the condition $C$ unchanged, i.e., $C(s) \equiv C(do(A, s))$. For others, this requirement is unnecessary and may therefore preclude finding a plan. A minimal requirement is that the truth value of $C(s)$ be recoverable. This is addressed by the following proposition:

**Proposition 1** A sequence $\alpha_1, \ldots, \alpha_n$ of actions is a plan for $\mathbf{Kwhether}(\varphi(s), s^*)$ iff
1. $\alpha_1$ is executable in $s$: $\mathcal{D} \models \text{POSS}(\alpha_1, s)$.
2. There is a sentence $\varphi'(do(\alpha_1, s))$ that does not mention any state term except $do(\alpha_1, s)$ such that $\mathcal{D} \models \varphi(s) \equiv \varphi'(do(\alpha_1, s))$, and $\alpha_2, \ldots, \alpha_n$ is a plan for $\mathbf{Kwhether}(\varphi'(do(\alpha_1, s)), s^*)$.

### Related Work and Discussion

The epistemic goals considered here are related in many ways to the information goals of (Etzioni et al. 1992). For instance, the information goal "determine if the file paper.tex contains the word theorem" in (Etzioni et al. 1992) can be formalized as the following epistemic goal: $\mathbf{Kwhether}(\text{CONTAINS}(\text{paper.tex}, \text{theorem}, S_0), s)$. Etzioni et al. address the problem of what the agent may change in achieving information goals. The information goals in (Etzioni et al. 1992) are defined procedurally, and are in many cases stricter than they need to be. For instance, if the agent knows that paper.tex contains the string theorem if paper.tex* does, then to satisfy the goal $\mathbf{Kwhether}(\text{CONTAINS}(\text{paper.tex}, \text{theorem}, S_0), s)$, the agent is allowed to erase the file paper.tex, since it knows that as far as the goal goes, it is just as well to use paper.tex*. In contrast, the corresponding information goal in (Etzioni et al. 1992) will forbid the agent to delete paper.tex. We remark that a similar problem (or feature) exists for classical planning. For instance, to satisfy the goal ON(A, B), the agent is allowed to paint the blocks, and not only put the block A on the block B, but also glue them together. There are some possible solutions. One is to require that plans be justified (Fink and Yang 1993), in the sense that they do not contain any unnecessary actions. Another is to strengthen the goals, for instance, to conjoin $\mathbf{Kwhether}(\text{CONTAINS}(\text{paper.tex}, \text{theorem}, S_0), s)$ with
\[
\forall s'\, (S_0 \leq s' \leq s \supset \forall x\, (\text{CONTENT}(\text{paper.tex}, x, s') \equiv \text{CONTENT}(\text{paper.tex}, x, S_0))),
\]
which is really a goal of maintenance: maintain the content of the file paper.tex.

### References

Lespérance, Yves; Levesque, Hector; Lin, Fangzhen; Marcu, Daniel; Reiter, Ray; and Scherl, Richard 1994. A logical approach to high-level robot programming — a progress report. In Control of the Physical World by Intelligent Systems, Working Notes of the 1994 AAAI Fall Symposium, New Orleans, LA.
Relational Algebra Interpreter in Context of Query Languages

Anshu Ranjan, Ratnesh Litoriya

Abstract—Relational database systems have succeeded commercially because of their openness and sturdy theoretical groundwork. The contribution of this work, "Relational Algebra Interpreter in context of query languages", is the presentation of a new implementation in which queries written in relational algebra can be compiled into SQL and executed on a relational database system. It takes a relational algebra statement as input and performs syntactic and lexical parsing on it. In the event of an error in the syntax of the expression it will forward the error to the user. If the syntax is correct, the relational algebra expression is converted into a SQL statement and executed on an RDBMS. This work can serve as a basis for learning Relational Algebra for different classes of users, as they will be given immediate feedback about their queries.

Index Terms—Relational Algebra; structured query language; parser; interpreter

I. INTRODUCTION

Before Dr. E. F. Codd's seminal paper on relational data banks, managing large amounts of data was a cumbersome process. Dr. Codd's work [1] suggested storing the data in a set of relations and manipulating it using relational algebra and calculus. All major database management systems in the market today are based on the relational model specified in his paper. The relational database management systems (RDBMSs) all use structured query language (SQL) to manipulate the data. SQL itself builds on relational algebra underpinnings. Relational algebra is a closed algebra which takes relations as input and produces relations as output. It defines many operators, such as join, project and select, on the relations that it operates on. A Relational Algebra Interpreter is presented in this paper. The implementation takes a relational algebra statement as input and performs syntactic and lexical parsing on it. In the event of an error in the syntax of the expression it will forward the error to the user. If the syntax is correct, the relational algebra expression is converted into a SQL statement and executed on an RDBMS. JLex and JCup can be used for the purpose of lexical and syntactic analysis of the input query.

The phases of the interpreter include:
- Translation, which converts a lexically and syntactically correct statement into target code, which can be executed directly on the machine.
- Execution, which involves executing a statement which is lexically and syntactically correct and converted into a target language.
- Error Handler, which is concerned with detecting and reporting lexical or syntactic errors.

**Fig. 2. Phases of the interpreter**

### III. THE DESIGN

#### A. Data Flow Diagram

The figure below shows an elaborate view of the interpreter. Its explanation is provided thereafter.

**Fig. 3: Level 1 DFD**

- First, a Relational Algebra query serves as input to the lexical analyser, i.e., JLex.
- JLex scans the input query, divides it into various lexemes, and checks whether the query matches the concerned regular expressions or not.
- If an error is found in this phase, then the error is reported and the command prompt gets ready for another input.
- If there is no error in the lexical phase, then the query is passed to the parser tool, which is JCUP.
- JCUP analyses the input query and checks whether the query follows the context-free grammar or not.
- If it does not follow the grammatical rules, an error is reported and control returns to the command prompt.

**Fig. 4: Use Case Diagram**

**B. Use Cases**

There are three use cases in the proposed software, as shown in Fig. 4, and they are explained below.
**Use Case 1: Check the Syntax of an Input Query**
- Write the query in the first text box.
- Click on the button 'Check Syntax' to check the correctness of the query.
- If the query is correct, a message box with the message 'Correct Query' is shown; otherwise a message box with the message 'Incorrect Query', along with the error location and type, is displayed.

**Use Case 2: Convert the Input Query to SQL**
- First check the correctness of the query by following the steps in Use Case 1.
- If the query is correct, then click the 'Convert to SQL' button.
- The converted query is displayed in the second text box.

**Use Case 3: Execute the Query**
- First follow the steps given in Use Cases 1 and 2.
- Click the button 'Execute the query'.
- The output is shown in a separate window.

For a better understanding of the flow of control, the flowchart diagram is shown in Fig. 5.

#### C. Design for lexical analyzer

JLex, a lexical analyzer generator in Java, is proposed to be used for implementing the lexical analysis phase of the interpreter. The first phase of compilation is lexical analysis — the decomposition of the input into tokens. A token is usually described by an integer representing the kind of token, possibly together with an attribute representing the value of the token. The lexical analyzer copes with text that may not be lexically valid by producing an error message.

Figure 5: Flowchart

A JLex input file is organized into three sections, separated by double-percent directives ("%%"). A proper JLex specification has the following format:

user code
%%
JLex directives
%%
regular expression rules

In the second section, various state names and macros can be declared, viz. keywords like project, rename, select, join, intersect, minus, times, etc.; symbol sets like digits (0 to 9), letters (a–z, A–Z), comparison operators (<, >, =), whitespace (\n, \t, \b); bar brackets ('[', ']'), brackets ('(', ')'), semicolon (;), comma (,), quotes ('), etc. In the third section, the action sequence in Java code is specified for when the lexical analyzer encounters a particular token. For example, if a token matching the keyword 'project' is encountered, then the symbol corresponding to that state, which is 3, is returned. Similarly, all the other tokens are duly taken care of. A sketch of this tokenization step is given below.
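The following sketch (ours, written in Python rather than the Java/JLex setup the paper uses) shows what the lexical phase amounts to: regular expressions for the keywords and symbols just listed, producing a token stream or a lexical error with its position. All names are illustrative.

```python
# Sketch (ours) of the lexical phase: keywords and symbols of the RA language
# are matched by regular expressions; unknown characters raise a lexical error.
import re

TOKEN_SPEC = [
    ("KEYWORD", r"\b(project|rename|select|join|union|intersect|minus|times|and)\b"),
    ("NUMBER",  r"\d+"),
    ("STR",     r"[A-Za-z_][A-Za-z0-9_.*]*"),
    ("COMP",    r"[<>=]"),
    ("BAROPEN", r"\["), ("BARCLOSE", r"\]"),
    ("BRACOPEN", r"\("), ("BRACCLOSE", r"\)"),
    ("COMMA",   r","), ("SEMI", r";"), ("QUOTE", r"'"),
    ("WS",      r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(query):
    pos, tokens = 0, []
    while pos < len(query):
        m = MASTER.match(query, pos)
        if not m:
            raise SyntaxError(f"lexical error at position {pos}: {query[pos]!r}")
        if m.lastgroup != "WS":                       # skip whitespace
            tokens.append((m.lastgroup, m.group()))
        pos = m.end()
    return tokens

print(tokenize("project [ename] (select [sal > 1000] (emp));"))
```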
#### D. Design for parser

CUP, a system for generating LALR parsers from simple specifications written in Java, is proposed to be used for parsing. Using CUP involves creating a simple specification based on the grammar for which a parser is needed, along with the construction of a scanner capable of breaking characters up into meaningful tokens (such as keywords, numbers, and special symbols), which is done using JLex in our case. The specification contains three main parts. The first part provides preliminary and miscellaneous declarations to specify how the parser is to be generated, and supplies parts of the runtime code. The second part of the specification declares terminals and non-terminals, and associates object classes with each. Here, we will illustrate the grammar of RA to be used:

Query ::= Expr SEMI | error;
Expr ::= ProjExpr | SelectExpr | RenameExpr | UnionExpr | MinusExpr | IntersectExpr | JoinExpr | TimesExpr | STR;
ProjExpr ::= PROJECT [ AttrList ] ( Expr );
RenameExpr ::= RENAME [ AttrList ] ( Expr );
AttrList ::= STR | AttrList , STR;
UnionExpr ::= ( Expr UNION Expr );
MinusExpr ::= ( Expr MINUS Expr );
IntersectExpr ::= ( Expr INTERSECT Expr );
JoinExpr ::= ( Expr JOIN Expr );
TimesExpr ::= ( Expr TIMES Expr );
SelectExpr ::= SELECT [ Condition ] ( Expr );
Condition ::= SimpleCondition | SimpleCondition AND Condition;
SimpleCondition ::= Operand COMP Operand;
Operand ::= STR | ' STR ' | NUMBER;

As can be seen above, the terminals are AND, PROJECT, STR, QUOTE, RENAME, TIMES, SELECT, BAROPEN, BARCLOSE, JOIN, COMMA, SEMI, MINUS, BRACOPEN, INTERSECT, BRACCLOSE, NUMBER, COMP and UNION, and the non-terminals are Query, Expr, ProjExpr, Operand, SimpleCondition, AttrList, SelectExpr, Condition, RenameExpr, UnionExpr, MinusExpr, IntersectExpr, JoinExpr and TimesExpr; these constitute the second part of the specification. The final part of the specification contains the grammar shown above.

#### E. Algorithm for converter

To convert a Relational Algebra query into a Structured Query Language query, all the keywords in RA need to be handled separately. Each of the keywords follows the steps shown below (a sketch of the conversion is given after the class overview).

1) Project:
   a) Replace 'project' with 'select'.
   b) Remove the bar brackets enclosing the attributes.
   c) Add the word 'from' after the attributes.
   d) If, before the next 'project' or 'rename' token, tokens of the order (",", "<string>", ",") are found, then remove them from the original string and append the "<string>" after 'from'. Otherwise don't take any action and exit.

2) Rename:
   a) If the 'rename' token is not followed by a 'project' token before the end of the rename statement, then replace it with the 'project' token, follow the same procedure as for 'project', and exit. Otherwise, do the following.
   b) Insert, next to all the occurrences of attributes mentioned after the 'project' token, the corresponding attributes given after the 'rename' token.
   c) Remove the 'rename' token and all the attributes after it.

3) Select:
   a) Replace the 'select' token with 'where'.
   b) Remove the bar brackets enclosing the condition after that.

4) Join: Replace the 'join' token with ') natural join ('. This facilitates execution of the natural join command in Oracle 10g.

5) Minus, Times, Union or Intersect: Replace the token with ') <current token> ('.

### IV. IMPLEMENTATION

**Class Diagram**

As shown below in Fig. 6, the project includes five classes, each of which is briefly described below.

**User Interface:** The RAI class is responsible for creating a user interface using Java Swing. It consists of two text boxes, one for accepting input from the user and the other for showing the converted SQL query. The user can check the syntax, convert the input query from RA to SQL, execute queries on a database, and empty the contents of the text boxes using the concerned buttons of the interface.

**Syntax Analyzer:** JLex and JCup are used to make the parser. The design of the input to these has already been explained in the design phase.

**Converter:** The class RAtoSQL converts the input RA query to SQL using various methods of the class ToolsforConversion.

**Execute:** This class is responsible for connecting the application with a database and executing the SQL query on it.
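A sketch (ours, in Python; the paper's RAtoSQL converter is written in Java) of the conversion steps for project, select and the binary operators described in Section E. The token handling is deliberately simplified and all names are illustrative assumptions.

```python
# Sketch (ours) of the RA -> SQL conversion for a subset of the grammar:
#   project [attrs] (select [cond] (relation))
# becomes  select attrs from relation where cond
import re

def ra_to_sql(query):
    q = query.strip().rstrip(";")
    m = re.fullmatch(r"project\s*\[(?P<attrs>[^\]]+)\]\s*\((?P<body>.*)\)", q, re.S)
    if not m:
        raise SyntaxError("only 'project [..] (..)' queries handled in this sketch")
    attrs, body = m.group("attrs"), m.group("body").strip()
    sel = re.fullmatch(r"select\s*\[(?P<cond>[^\]]+)\]\s*\((?P<rel>.*)\)", body, re.S)
    if sel:                                    # step 3: 'select' becomes WHERE
        rel, where = sel.group("rel").strip(), " where " + sel.group("cond").strip()
    else:
        rel, where = body, ""
    rel = re.sub(r"\bjoin\b", ") natural join (", rel)    # step 4
    for op in ("union", "minus", "intersect", "times"):   # step 5
        rel = re.sub(rf"\b{op}\b", f") {op} (", rel)
    return f"select {attrs} from {rel}{where}"            # steps 1-2

print(ra_to_sql("project [ename, sal] (select [sal > 1000] (emp));"))
# select ename, sal from emp where sal > 1000
print(ra_to_sql("project [ename] ((emp join dept));"))
# select ename from (emp ) natural join ( dept)
```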
**USER INTERFACE**

We have used JFrame for constructing the interface. The main class inherits from the class JFrame. At the beginning the constructor of the parent class is called. As shown in Figure 6, the following are the main components of the user interface:

i) Text boxes: The text box on the top is to accept an input query from the user in RA. The second text box contains the converted SQL query, once the user supplies a correct input and clicks on the button 'Convert to SQL'.

ii) Buttons: Four buttons are used in the interface, namely Check Syntax, Convert to SQL, Execute Query and Reset.
- Check Syntax: Checks whether the input query is lexically and syntactically correct or not. If the query is correct then a message box showing the message "Correct Query" is displayed. If the query is incorrect, then a message box showing the message "Incorrect Query", along with the location and type of error, is displayed.
- Convert to SQL: This button converts a correct query to SQL and shows the output in the second text box.
- Execute Query: Executes the SQL query on a database. The output is shown in another window.
- Reset: Clears the text in the two text boxes.

Following are the methods used in the module along with their brief descriptions:

i) main(): This method first makes certain adjustments in the graphics of the interface. Then it calls the constructor RAI().
ii) RAI(): This constructor calls the constructor of the parent class. Then it calls the method initializeComponent(). Ultimately it sets the value true to the setVisible property of the current object.
iii) initializeComponent(): This method is responsible for creating the whole of the user interface. It creates all the text boxes, buttons and labels with appropriate captions. It also adds the ActionListener property to the buttons so that a button responds on being clicked.
iv) Button1_actionPerformed(ActionEvent e): This method is called once the button with caption "Check Syntax" is clicked. It calls the method lexcheck().
v) Button2_actionPerformed(ActionEvent e): This method is called once the button with caption "Convert to SQL" is clicked. It calls the method convert() of the converter class, passing the input query in the first text box as a parameter.
vi) Button3_actionPerformed(ActionEvent e): This method is called once the button with caption "Execute Query" is clicked. It calls the main method of the class SimpleOraJava, passing the input query in the first text box as a parameter.
vii) Button4_actionPerformed(ActionEvent e): This method is called once the button with caption "Reset" is clicked. It clears the text of the two text boxes.
viii) lexcheck(): This method collects the input query in a variable and sends it for lexical and syntactic analysis by calling the methods of the classes Yylex and parser.
ix) RAI(int): This constructor contains no code. It is used by the error detecting module of the project for the purpose of declaring a reference variable of this class.

#### A. Input to Lexical Analyzer

We have used JLex for the purpose of lexical analysis of the input query. JLex is a lexical analyzer generator (also known as a scanner generator) for Java, written in Java. The first phase of compilation is lexical analysis — the decomposition of the input into tokens. A token is described by an integer representing the kind of token, possibly together with an attribute representing the value of the token. The lexical analysis copes with text that may not be lexically valid by producing an error message. A JLex input file is organized into three sections, separated by double-percent directives ("%%").
A proper JLex specification has the following format:

user code
%%
JLex directives
%%
regular expression rules

The "%%" directives distinguish sections of the input file and must be placed at the beginning of their line. The remainder of the line containing the "%%" directives may be discarded and should not be used to house additional declarations or code. The user code section — the first section of the specification file — is copied directly into the resulting output file. This area of the specification provides space for the implementation of utility classes or return types. The JLex directives section is the second part of the input file. Here, macro definitions are given and state names are declared. The third section contains the rules of lexical analysis, each of which consists of three parts: an optional state list, a regular expression, and an action. Next, we describe our input to JLex.

**User code**

Only one line was written in this part:

`import java_cup.runtime.Symbol; /* so that there is no problem in running the code which we will write in the section of regular expression rules */`

**JLex directives**

In this part, the following macros were declared:

1. **digit** – It contains all the digits from 0 to 9.
2. **COMP** – It contains the three operators '<', '>' and '='.
3. **NUMBER** – It contains all the possible combinations of digits 0 to 9.
4. **BAROPEN** – It contains the character '['.
5. **BARCLOSE** – It contains the character ']'.
6. **whitespace** – It contains the escape sequences '\t\n\t\f' and a space ' '.
7. **letters** – It contains all letters in both cases, the digits and the symbols '_', '.', '*' and ','.
8. **PROJECT** – It contains the keyword 'project'.
9. **SELECT** – It contains the keyword 'select'.
10. **SEMI** – It contains the symbol ';'.
11. **BRACOPEN** – It contains the symbol '('.
12. **BRACCLOSE** – It contains the symbol ')'.
13. **COMMA** – It contains the symbol ','.
14. **RENAME** – It contains the keyword 'rename'.
15. **UNION** – It contains the keyword 'union'.
16. **INTERSECT** – It contains the keyword 'intersect'.
17. **MINUS** – It contains the keyword 'minus'.
18. **JOIN** – It contains the keyword 'join'.
19. **TIMES** – It contains the keyword 'times'.
20. **STR** – It contains all the possible sequences of letters described in the macro 'letters'.
21. **QUOTE** – It contains the quotation symbol.
22. **AND** – It contains the keyword 'and'.

**B. Regular Expression Rules**

This section tells the analyzer what action it should take once it encounters a particular token in the input query. If it encounters one of the macros MINUS, JOIN, TIMES, SELECT, COMMA, BAROPEN, BARCLOSE, PROJECT, BRACOPEN, BRACCLOSE, SEMI, or a combination of the macros COMP and letters, it returns a number corresponding to each of them as specified in the symbol table, which is made by the analyzer itself. An example of the specification follows:

```
{AND}   {return new Symbol(sym.AND);}
{SEMI}  {return new Symbol(sym.SEMI);}
{COMP}  {return new Symbol(sym.COMP);}
```

Thus, the input file was saved with the .lex extension. Now, it was time to make the input file to JCup.

**V. INPUT TO PARSER**

We have used the Java tool JCup for the purpose of parsing. **JCup**, i.e. Java Based Constructor of Useful Parsers (CUP for short), is a system for generating LALR (Look-Ahead LR) parsers from simple specifications.
Using CUP involves creating a simple specification based on the grammar for which a parser is needed, along with the construction of a scanner capable of breaking characters up into meaningful tokens (such as keywords, numbers, and special symbols). A CUP specification has three main parts. The first part provides preliminary and miscellaneous declarations to specify how the parser is to be generated, and supplies parts of the runtime code. In this case we indicate that the java_cup.runtime and javax.swing.* classes should be imported, and then supply a small bit of initialization code, and some code for invoking the scanner to retrieve the next input token. The second part of the specification declares terminals and non-terminals, and associates object classes with each. In this case, we declare our terminals as being represented at runtime by two object types: token and int_token (which are supplied as part of the CUP runtime system), while the various non-terminals are represented by objects of types symbol and int_token (again supplied from the runtime system). The final part of the specification contains the grammar.

**A. Initialization Code**

Initially, some initialization code is added for the purpose of error detection and reporting. This is done by enclosing the code in a parser code declaration, as it allows methods and variables to be placed directly within the generated parser class. The following methods and global variables are placed in the code:

i) **query**: This variable stores the input query.
ii) **errortype**: This variable stores the number corresponding to the type of error encountered. The error numbers and types are:

| Error Number | Type |
|---|---|
| 1 | Expecting open bar bracket '[' |
| 2 | Bar bracket not closed |
| 3 | Expecting open bracket '(' |
| 4 | Invalid syntax of 'project' |
| 5 | Expecting a bracket or semicolon |
| 6 | Invalid syntax of 'rename' |
| 7 | Invalid syntax of 'union' |
| 8 | Invalid syntax of 'minus' |
| 9 | Invalid syntax of 'intersect' |
| 10 | Invalid syntax of 'join' |
| 11 | Invalid syntax of 'times' |
| 12 | Invalid syntax of 'select' |

iii) **ci**: This variable stores the index value of the currently scanned token of the input query.
iv) **thequery()**: This method first declares a reference object variable of the class RAI. After that, it receives the input query of the user using the static variable 'query' of the class, and subsequently stores it in the global variable of this module called 'query'.
v) **setci(char)**: This method sets the variable ci to the index after the first occurrence of the character passed as the parameter in the input query after the current ci.
vi) **setci(String)**: This method sets the variable ci to the index after the first occurrence of the string passed as the parameter in the input query after the current ci.
vii) **setzero()**: It sets the value of the global variables ci and errortype to zero.
viii) **settype(int)**: It assigns the type of error which is passed as a parameter to the variable errortype.
ix) **displayerror()**: It identifies the type of error encountered and displays it in a message box along with the query.
**B. Terminals and Nonterminals**

The following terminals and non-terminals were declared. Terminals: AND, PROJECT, STR, QUOTE, RENAME, TIMES, SELECT, BAROPEN, BARCLOSE, JOIN, COMMA, SEMI, MINUS, BRACOPEN, INTERSECT, BRACCLOSE, NUMBER, COMP, UNION. Note that the terminals were declared using capital letters, while the non-terminals were declared using mixed case.

**C. The Grammar**

The grammar specification is the same as described in the previous section and is shown below:

Query ::= Expr SEMI | error;
Expr ::= ProjExpr | SelectExpr | RenameExpr | UnionExpr | MinusExpr | IntersectExpr | JoinExpr | TimesExpr | STR;
ProjExpr ::= PROJECT BAROPEN AttrList BARCLOSE BRACOPEN Expr BRACCLOSE;
RenameExpr ::= RENAME BAROPEN AttrList BARCLOSE BRACOPEN Expr BRACCLOSE;
AttrList ::= STR | AttrList COMMA STR;
UnionExpr ::= BRACOPEN Expr UNION Expr BRACCLOSE;
MinusExpr ::= BRACOPEN Expr MINUS Expr BRACCLOSE;
IntersectExpr ::= BRACOPEN Expr INTERSECT Expr BRACCLOSE;
JoinExpr ::= BRACOPEN Expr JOIN Expr BRACCLOSE;
TimesExpr ::= BRACOPEN Expr TIMES Expr BRACCLOSE;
SelectExpr ::= SELECT BAROPEN Condition BARCLOSE BRACOPEN Expr BRACCLOSE;
Condition ::= SimpleCondition | SimpleCondition AND Condition;
SimpleCondition ::= Operand COMP Operand;
Operand ::= STR | QUOTE STR QUOTE | NUMBER;

Still, this is not the exact input in this part. Much code was added in between the grammar specification for the identification and reporting of errors. For example, if the user forgets to type the opening bar bracket '[' after the keyword 'project', the parser encounters a production of the following form, whose embedded actions record the error type and the current position:

```
ProjExpr ::= PROJECT
               {: System.out.println("current = " + parser.ci); parser.settype(1); :}
             BAROPEN AttrList
               {: parser.setci(']'); parser.settype(2); :}
             BARCLOSE BRACOPEN
               {: parser.setci('('); parser.settype(3); :}
             Expr
               {: parser.settype(4); :}
             BRACCLOSE
               {: parser.settype(5); :}
             ;
```

As can be seen, as soon as the parser sees the keyword 'project', it sets the error type to 1, since if at this stage there is an error in parsing, it means that there is a missing opening bar bracket. The error type is recorded at each point, and if parsing fails the control reaches the error alternative of the Query production:

```
Query ::= {: parser.thequery(); :} Expr SEMI
            {: JOptionPane.showMessageDialog(null, "Correct Query!");
               parser.setzero(); :}
        | error
            {: System.out.println(parser.ci);
               parser.displayerror();
               parser.setzero(); :}
        ;
```

Here we see that when an error occurs, the flow of control of the parser reaches the error part. It calls the displayerror() method, which has been explained previously. Hence the error message box appears along with the error location and type. Ultimately all the global variables are set to zero so that the error code will work properly the next time. Similarly, code has been added to each line of the grammar to specify the error type and set the value of ci.

### VI. CONCLUSION AND FUTURE EXTENSIONS

**A. Conclusion**

During the course of making this project, the gain in terms of experience and knowledge was exemplary and should be highly useful to us in the long run. The gamut of learning covers UML diagrams, Java programming, designing algorithms and research work, to name a few. We especially enjoyed exploiting the vast number of classes in the Java API while coding the conversion of queries from RA to SQL. Though we have tried to make the system as user-friendly and reliable as possible, it has certain limitations, as explained in the following section.
**B. Limitations**

i) 'Rename' only supports renaming of attributes; it does not support renaming of the input relation.
ii) The inner join operator is not supported.
iii) It does not support any functions other than the primitive operations of relational algebra, such as aggregate functions (e.g., count).

**C. Relevance in the present scenario**

Databases are the focus of most introductory courses on database management systems. The formal relational query languages, like relational algebra, are therefore an important part of the curriculum. As a student, it is difficult to know whether your queries expressed on paper in the formal languages are correct. As an instructor, it is often difficult to grade creative queries, especially those that have not been verified by an educational tool. The goal of the Relational Algebra Interpreter is to provide a mechanism by which students can explore the formal relational query languages, getting immediate feedback by seeing the answers to the posed query. Consequently, the grading process is usually eased, since the students can submit verified answers on homework assignments.

**D. Future Extensions**

Though the project is complete in itself and satisfies all the objectives encompassed by the scope, there are many enhancements which can be made to it. Some of these plausible upgrades are listed below:

i) Connecting with the database: The software will allow the user to choose the database on which the query is to be executed. This can be done by changing the URL while connecting with a database. In this way, the query could be executed on Access, MySQL or any other database.
ii) Taking an RA file as input: The software will take a file containing a series of RA queries and execute them sequentially. The output can be shown in a separate file.
iii) Create a database: The user can create his own database in which he makes various relations and performs relational algebra operations on them.
iv) Delete or modify relations: The user can edit the relations present in the database.
MONITORING COOPERATIVE BUSINESS CONTRACTS IN AN INSTITUTIONAL ENVIRONMENT

Henrique Lopes Cardoso, Eugénio Oliveira
LIACC, DEI / Faculdade de Engenharia, Universidade do Porto, R. Dr. Roberto Frias, 4200-465 Porto, Portugal
hlc@fe.up.pt, eco@fe.up.pt

Keywords: Electronic contract monitoring, Contractual obligations, Deadlines, Rules

Abstract: The automation of B2B processes is currently a hot research topic. In particular, multi-agent systems have been used to address this arena, where agents can represent enterprises in an interaction environment, automating tasks such as contract negotiation and enactment. Contract monitoring tools are becoming more important as the level of automation of business relationships increases. When business is seen as a joint activity that aims at pursuing a common goal, the successful execution of the contract benefits all involved parties, and thus each of them should try to facilitate the compliance of their partners. Taking into account these concerns and inspecting international legislation over trade procedures, in this paper we present an approach to model contractual obligations: obligations are directed from bearers to counterparties and have flexible deadlines. We formalize the semantics of such obligations using temporal logic, and we provide rules that allow for monitoring them. The proposed implementation is based on a rule-based forward chaining production system.

1 INTRODUCTION

Technological support for B2B is increasing. A line of research consists of automating (part of) the process of creation and execution of e-contracts through multi-agent systems (MAS) technology: software agents can be used as enterprise delegates, automating tasks such as contract negotiation and enactment. We have formalized and developed an Electronic Institution platform motivated by the need to develop services that assist agents (representing real-world entities) when interacting with the aim of establishing business relationships. Contracts make the commitments of parties explicit, and an institutional environment seeks to monitor and enforce those contracts. In the B2B world (particularly in cases where a business relationship is strategic) it is often the case that parties cooperate in contract enactment. A contract specifies obligations between contractual parties, and provides legal options for handling non-compliance cases. When a business relationship is seen as a joint activity that aims at pursuing a common goal, the successful execution of the contract benefits all involved parties, and thus each of them should try to facilitate the compliance of their partners. In an agent-based environment for contract monitoring, each party is represented in the system by a software agent. Automated monitoring tools should take into account that agents may be cooperative enough to allow, in some circumstances, deviations from their counterparties. This is because group success also benefits each agent's private goals, which are not limited to the ongoing business relationship, but also concern future opportunities that may arise. Our approach to contract monitoring is based on real-world evidence from business contract legislation, namely the United Nations Convention on Contracts for the International Sale of Goods – CISG (UNCITRAL, 1980), which denotes a flexible and even cooperative facet of trade contracts. In this paper we present the model of contractual obligations that we advocate – directed obligations with liveline and deadline.
We formalize the semantics of these obligations and provide an implementation in the context of an institutional contract monitoring environment.

The paper is structured as follows. In section 2 we introduce our approach to modelling contractual obligations, and formalize their semantics using temporal logic. In section 3 we translate this formalization to a rule-based approach, and we provide an implementation using a rule engine. An example is given that shows how a model of flexible deadlines can be exploited in a contract. Section 4 concludes.

2 MODEL FOR CONTRACTUAL OBLIGATIONS

When reaching a business agreement, parties sign a contract including norms that describe their commitments. In our approach, an institutional normative environment provides a contract monitoring service, including contractual norms and monitoring rules. Active monitoring requires recording contract-related events in a so-called normative state, including:
- institutional facts: institutionally recognized facts that are brought about by agents
- obligations: what agents should do
- fulfillments: obligations that are fulfilled
- violations: obligations that are not fulfilled

Elements other than institutional facts are environment events, asserted in the process of norm activation and monitoring. Obligations describe what agents should do, and may be fulfilled or not. We consider different violation states, as explained later. In general terms, a norm prescribes, given a certain state of affairs, what an agent is obliged to do: situation → prescription. The situation comprises any combination of events that have occurred for a given contract. Consider, for instance, a simple purchase contract. We may have a norm indicating that when the seller issues the invoice (modeled as an institutional fact), the buyer is obliged to pay. Furthermore, we may state that once the payment has been fulfilled, the seller is obliged to send the receipt. We may also define norms based on violations: if the buyer does not pay within the due date, an interest rate will be applied.

2.1 Directed Obligations with Time Windows

When specifying norms in contracts, deadline handling is central to defining the semantics of contractual obligations. We have developed a model for these obligations inspired by the CISG convention (UNCITRAL, 1980). Consider the following excerpts:

Article 48: (1) [...] the seller may, even after the date for delivery, remedy at his own expense any failure to perform his obligations, if he can do so without unreasonable delay [...] (2) If the seller requests the buyer to make known whether he will accept performance and the buyer does not comply with the request within a reasonable time, the seller may perform within the time indicated in his request. [...]

We define the following terms to signal the occurrence of specific obligation states:
- obligation $O_{b,c}(f,l,d)$: the obligation was prescribed by a norm, and is therefore active
- liveline violation $LViol_{b,c}(f,l,d)$: the fact being obliged has been brought about ahead of time
- deadline violation $DViol_{b,c}(f,l,d)$: the fact being obliged should have been brought about already
- fulfillment $Fulf_{b,c}(f,l,d)$: the obligation was fulfilled
- violation $Viol_{b,c}(f,l,d)$: the obligation was violated and cannot be fulfilled anymore

We introduce the element $Den_{c,b}(f,l,d)$, which is a denounce from agent $c$ towards agent $b$ regarding the failure of the latter to comply with his obligation to bring about $f$ before $d$.
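To make the norm format concrete, the purchase-contract norms sketched above can be written in this notation. This is an illustrative sketch added for readability, not taken from the original text; the fact names ($Invoice$, $Pay$, $Receipt$, $PayInterest$) and the livelines and deadlines $l$, $d$, $l'$, $d'$, $l''$, $d''$ are placeholders:

$$Invoice \rightarrow O_{buyer,seller}(Pay, l, d)$$
$$Fulf_{buyer,seller}(Pay, l, d) \rightarrow O_{seller,buyer}(Receipt, l', d')$$
$$DViol_{buyer,seller}(Pay, l, d) \rightarrow O_{buyer,seller}(PayInterest, l'', d'')$$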
We consider the achievement of facts to be common knowledge: a party may only denounce the non-fulfillment of an obligation while that obligation is not yet fulfilled. A deadline is meant to indicate when the counterparty is authorized to react to the non-fulfillment of an obligation directed to him. A possible reaction is to declare the obligation as violated. However, the counterparty might want to concede an extended deadline. We emphasize the case for a deadline violation (as opposed to an obligation violation). This comprises a flexible approach to handling non-ideal situations: each deadline violation is different, as each may have a different impact on the ongoing business, and occurs between a specific pair of agents with a unique trust relationship.

Figure 1 illustrates the lifecycle of a directed obligation with liveline and deadline: $O_{b,c}(f,l,d)$. When $l$ arises the obligation becomes pending, since only then may it be fulfilled according to the terms of the contract. In case of an anticipated achievement of $f$ (a liveline violation), we only need $l$ to arrive in order to consider the obligation as fulfilled. However, this does not prevent the counterparty from reacting to this early achievement (although not by denouncing it). A deadline violation can be resolved in two ways: successfully, by a belated achievement of $f$, or unsuccessfully, by a denounce.

### 2.2 Formalization

We define the following terms to signal the occurrence of specific obligation states:
- obligation $O_{b,c}(f,l,d)$: the obligation was prescribed by a norm, and is therefore active
- liveline violation $LViol_{b,c}(f,l,d)$: the fact being obliged has been brought about ahead of time
- deadline violation $DViol_{b,c}(f,l,d)$: the fact being obliged should have been brought about already
- fulfillment $Fulf_{b,c}(f,l,d)$: the obligation was fulfilled
- violation $Viol_{b,c}(f,l,d)$: the obligation was violated and cannot be fulfilled anymore

A denounce $Den_{c,b}(f,l,d)$ may only be issued by the counterparty $c$ while the obliged fact $f$ has not yet been brought about. When $f$ is still not the case when deadline $d$ arises, the obligation is not violated in itself: it enters a deadline violation state, from which it may still be fulfilled belatedly or, after a denounce, declared violated.

Borrowing from temporal logic (with $B$ denoting the "before" operator), the following relations express the semantics of our obligations:
- $O_{b,c}(f,l,d) \land (f \;B\; l) \models LViol_{b,c}(f,l,d)$: a liveline violation occurs when the obliged fact is brought about before the liveline.
- $O_{b,c}(f,l,d) \land l \land (f \;B\; d) \models Fulf_{b,c}(f,l,d)$: a fulfillment occurs after the liveline when the obliged fact is brought about before the deadline.
- $O_{b,c}(f,l,d) \land (d \;B\; f) \models DViol_{b,c}(f,l,d)$: a deadline violation occurs when the deadline arrives before the obliged fact.
- $DViol_{b,c}(f,l,d) \land (f \;B\; Den_{c,b}(f,l,d)) \models Fulf_{b,c}(f,l,d)$: a fulfillment occurs after a deadline violation if the obliged fact is obtained before a denounce.
- $DViol_{b,c}(f,l,d) \land (Den_{c,b}(f,l,d) \;B\; f) \models Viol_{b,c}(f,l,d)$: a violation occurs after a deadline violation if a denounce is made before the obliged fact occurs.
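A small worked instance (added for illustration; not from the original text) helps to read these relations. Suppose the active obligation is $O_{b,c}(f,3,5)$, i.e. $b$ should bring about $f$ between time points 3 and 5:
- If $f$ is brought about at time 2, before the liveline, $LViol_{b,c}(f,3,5)$ holds; once the liveline 3 arrives, the obligation also counts as fulfilled, as described above.
- If $f$ is brought about at time 4, $Fulf_{b,c}(f,3,5)$ holds and nothing else needs to happen.
- If time 5 arrives without $f$, $DViol_{b,c}(f,3,5)$ holds. Should $f$ then occur at time 7 before any denounce, the obligation is belatedly fulfilled; should $c$ instead denounce at time 6 before $f$ occurs, $Viol_{b,c}(f,3,5)$ holds and the obligation can no longer be fulfilled.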
We have two kinds of temporal violations: liveline violations – $LViol_{b,c}(f,l,d)$ – and deadline violations – $DViol_{b,c}(f,l,d)$. Note that when the deadline is reached we set the obligation to have a violated deadline, but not to be violated in itself. Only a denounce establishes such a state.

### 3 Automated Monitoring with a Rule-Based Engine

The logical relationships expressed above provide us with a formalism to define directed obligations with livelines and deadlines. In order to develop appropriate tools to monitor contracts at run-time, we ground these semantics in a reasoning engine capable of responding to events as soon as they occur. A natural choice we have made before (Lopes Cardoso and Oliveira, 2009) is the use of a rule-based forward-chaining production system. These systems are composed of a knowledge base (rules), a working memory (facts) and an inference engine that matches rules’ conditions with facts, producing changes in working memory. Figure 2 instantiates these concepts for our purposes.

The following (forward-chaining) rules can be defined to implement the semantics of directed obligations with livelines and deadlines:
- $O_{b,c}(f,l,d) \land f \land \neg l \rightarrow LViol_{b,c}(f,l,d)$
- $O_{b,c}(f,l,d) \land l \land f \land \neg d \rightarrow Fulf_{b,c}(f,l,d)$
- $O_{b,c}(f,l,d) \land d \land \neg f \rightarrow DViol_{b,c}(f,l,d)$
- $DViol_{b,c}(f,l,d) \land f \land \neg Den_{c,b}(f,l,d) \rightarrow Fulf_{b,c}(f,l,d)$
- $DViol_{b,c}(f,l,d) \land Den_{c,b}(f,l,d) \land \neg f \rightarrow Viol_{b,c}(f,l,d)$

Each relation of the form $(e_1 \;B\; e_2)$ was translated into a conjunction $e_1 \land \neg e_2$. This allows us to detect the moment at which the $B$ relation holds, and consequently to reason about its consequences. We assume an immediate assertion of facts and temporal references when they come into being. Furthermore, rules are expected to be evaluated on every working memory update (e.g. right after a fact is asserted), in order to produce the indicated conclusions. These conclusions are added to the normative state.

### 3.1 Reasoning with Time

In business contracts, deadlines are usually dependent on the fulfillment dates of other obligations. Instead of having fixed (absolute) dates, these may at times be relative, calculated according to other events. CISG (UNCITRAL, 1980) expresses this by saying that dates can be determinable from the contract:

**Article 33:** The seller must deliver the goods: (a) if a date is fixed by or determinable from the contract, on that date; (b) if a period of time is fixed by or determinable from the contract, at any time within that period [...]

**Article 59:** The buyer must pay the price on the date fixed by or determinable from the contract [...]

It is therefore useful to timestamp each event. Hence, $Fulf_{b,c}(f,l,d)^t$ will indicate a fulfillment at time point $t$; similarly for $Viol_{b,c}(f,l,d)^t$. Since a fact itself now has a timestamp attribute, for ease of reading we will write fact $f$ achieved at time point $t$ as $Fact(f)^t$. A denounce will be written $Den_{c,b}(f,l,d)^t$. Norms will be based on these elements and time references in order to prescribe obligations with relative deadlines. For example, $Fulf_{b,c}(Deliver(x),l,d)^t \rightarrow O_{c,b}(Pay(price), t, t + 10)$ would mean that once agent $b$ has fulfilled his obligation to deliver $x$ to agent $c$, the latter is obliged to pay the former within a period of 10 time units.
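Continuing this example with concrete numbers (an illustrative instance added here, not from the original text): if the delivery obligation is fulfilled at time point 12, i.e. $Fulf_{b,c}(Deliver(x),l,d)^{12}$ holds, the norm prescribes $O_{c,b}(Pay(price), 12, 22)$: agent $c$ may pay from time 12 onwards and should have paid before time 22, after which a deadline violation is flagged.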
Having timestamps also allows us to define rules with a closer reading to the LTL before operator:
- $O_{b,c}(f,l,d) \land Fact(f)^t \land t < l \rightarrow LViol_{b,c}(f,l,d)$
- $O_{b,c}(f,l,d) \land l \land Fact(f)^t \land t < d \rightarrow Fulf_{b,c}(f,l,d)^t$
- $O_{b,c}(f,l,d) \land d \land \neg (Fact(f)^t \land t < d) \rightarrow DViol_{b,c}(f,l,d)$
- $DViol_{b,c}(f,l,d) \land Fact(f)^t \land \neg (Den_{c,b}(f,l,d)^u \land u \leq t) \rightarrow Fulf_{b,c}(f,l,d)^t$
- $DViol_{b,c}(f,l,d) \land Den_{c,b}(f,l,d)^u \land \neg (Fact(f)^t \land t \leq u) \rightarrow Viol_{b,c}(f,l,d)^u$

This kind of approach has the benefit of relaxing the rule evaluation policy, because we are checking the timestamps of each event (see also (Lopes Cardoso and Oliveira, 2009)).

### 3.2 Implementation with Jess

We have chosen Jess\(^3\) (Friedman-Hill, 2003) to implement our norm monitoring system. Jess is a very efficient rule engine based on the Rete algorithm for pattern matching. We start by defining appropriate templates (through deftemplate constructs) for each type of element in working memory. Jess facts follow a frame-like notation, in which each fact has associated slots to be filled in. We have:

```plaintext
(deftemplate ifact
  (multislot fact)
  (slot when))

(deftemplate time
  (slot when))
```

\(^3\)The code presented in this section includes some simplifications in order to make it simpler to understand. In particular, we included a reference to the obligation inside the other templates.

The time template is used to assert the occurrence of time events (associated with livelines and deadlines), which is done by scheduling alerts using a system clock. Implementing the monitoring rules in Jess is straightforward. A Jess rule is written in the form LHS => RHS, where the LHS includes fact patterns that will be matched against facts in working memory. The RHS indicates actions to execute (such as asserting new facts) when the rule is fired. The following rules (defined with defrule constructs) translate directly from the monitoring rules shown above (identifiers starting with a question mark are variables).

```plaintext
(defrule detect-liveline-violation
  ?obl <- (obligation (fact $?f) (liveline ?l))
  (ifact (fact $?f) (when < ?l))
  =>
  (assert (liveline-violation (obl ?obl))))

(defrule detect-fulfillment
  ?obl <- (obligation (fact $?f) (liveline ?l) (deadline ?d))
  (time (when ?l))
  (ifact (fact $?f) (when ?t))
  (test (< ?t ?d))
  =>
  (assert (fulfillment (obl ?obl) (when ?t))))

(defrule detect-deadline-violation
  ?obl <- (obligation (fact $?f) (deadline ?d))
  (time (when ?d))
  (not (ifact (fact $?f) (when < ?d)))
  =>
  (assert (deadline-violation (obl ?obl))))

(defrule detect-belated-fulfillment
  (deadline-violation (obl ?obl))
  ?obl <- (obligation (fact $?f))
  (ifact (fact $?f) (when ?t))
  (not (denounce (obl ?obl) (when <= ?t)))
  =>
  (assert (fulfillment (obl ?obl) (when ?t))))

(defrule detect-violation
  (deadline-violation (obl ?obl))
  ?obl <- (obligation (fact $?f))
  (denounce (obl ?obl) (when ?u))
  (not (ifact (fact $?f) (when <= ?u)))
  =>
  (assert (violation (obl ?obl) (when ?u))))
```

These rules enable us to monitor the compliance of agents with prescribed obligations. As explained at the beginning of section 2, norms have a rule-like format and are based on contract-related events, such as those obtained by firing monitoring rules. Not surprisingly, norms too are implemented in Jess as rules. What makes them norms is that typically they will be used to obtain new obligations (these will be asserted in the RHS of norms), which will then be monitored.
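To give a sense of how such a monitor might be driven from a host application, the sketch below uses Jess’s standard Java API (jess.Rete) to load a rule file, assert an order fact, signal the arrival of a time point, and run the engine. This is a minimal illustration added here rather than code from the paper: the file name monitor.clp is hypothetical, and the fact contents assume the simplified templates and norms shown in this section.

```java
import jess.JessException;
import jess.Rete;

public class ContractMonitorDemo {
    public static void main(String[] args) throws JessException {
        Rete engine = new Rete();

        // Load templates, monitoring rules and contract norms
        // (hypothetical file containing the constructs shown in this section).
        engine.batch("monitor.clp");
        engine.reset();

        // Institutional fact: an order for 5 units of X placed at time point 1.
        engine.eval("(assert (ifact (fact order item X quantity 5) (when 1)))");
        engine.run(); // a norm such as n1 in the example of section 3.3 would now prescribe a delivery obligation

        // Time point 6 arrives, i.e. the delivery deadline (1 + 5) has passed.
        engine.eval("(assert (time (when 6)))");
        engine.run(); // detect-deadline-violation can now fire if no delivery fact was recorded
    }
}
```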
The normative environment’s monitoring capabilities may be used as a tool for alerting agents when certain contract-related events occur. Further rules may be defined with such a purpose. The RHS of Jess rules may include function calls that implement the desired level of responsiveness of the normative environment where notifications are concerned.

### 3.3 Example

In this section we show a simple example where the concept of flexible deadlines is exploited in an electronically supervised business relationship. We have a contract between two agents, say B and S, wherein S commits to supply, whenever ordered, good X for 7.5 per unit. The norms below (implemented as Jess rules) define the contractual relationship and are included in the institutional normative environment for monitoring purposes. Agent S is supposed to deliver the ordered goods between 3 and 5 days after the order (norm n1), and agent B shall pay within 30 days (norm n2). Furthermore, if agent B does not pay in due time, he will incur a penalty consisting of an obligation to pay an extra 10% on the order total (norm n3). Finally, if agent S violates his obligation to deliver, the contract will be canceled (n4).

```plaintext
(defrule n1
  (ifact (fact order item X quantity ?q) (when ?w))
  =>
  (assert (obligation (bearer S) (counterparty B)
                      (fact delivery X qt ?q)
                      (liveline (+ ?w 3)) (deadline (+ ?w 5)))))

(defrule n2
  (fulfillment (obl ?obl) (when ?w))
  ?obl <- (obligation (fact delivery X qt ?q))
  =>
  ;; payment obligation for B, completed here to be consistent with the
  ;; contract described above (pay 7.5 per unit within 30 days of delivery)
  (assert (obligation (bearer B) (counterparty S)
                      (fact payment (* 7.5 ?q))
                      (liveline ?w) (deadline (+ ?w 30)))))
```

Note that the interest applied on payments is automatic once a deadline violation is detected (norm n3). On the other hand, a contract cancellation (norm n4) requires that agent B denounces the inability of agent S to fulfill the delivery. It is therefore up to agent B whether to wait further and accept a delayed delivery or not. If the agreed-upon contract conditions are important enough, allowing a counterparty deviation (and hence taking a cooperative attitude regarding the compliance of the contract) may be a good decision. Different kinds of situations may be easily modeled using this kind of norm. Moreover, using flexible deadlines also ensures a degree of freedom for agents to make decisions in the execution phase of contracts, which is important for dealing with business uncertainty.

4 CONCLUSIONS

In B2B relationships, contracts specify, through obligations, the interactions between different partners, and provide legal options to which parties can resort in case of conflict. However, when this joint activity aims at pursuing a common goal, the successful performance of business benefits all involved parties. Therefore, when developing automated monitoring tools, one should take into account that agents may be cooperative enough to allow some deviations by their counterparties. In this paper we have introduced a model for contractual obligations. We defined them as directed obligations with livelines and deadlines. The directed aspect concerns the need to identify the agent who will be authorized to react in case of non-fulfillment. Obligation violations are now dependent on the counterparty’s motivation to claim them. Our approach is based on real-world evidence from business contracts (namely the United Nations Convention on Contracts for the International Sale of Goods), which denotes a flexible and even cooperative facet of trade contracts.
This extends to the concept of B2B Virtual Organizations, wherein different parties come together to share a business goal that is achievable through the cooperative fulfillment of a common contract. Rule-based production systems allow applying rules in a forward-chaining way. This data-driven approach is appropriate for modeling a norm monitoring environment based on events. Jess is a powerful rule engine that allows a straightforward implementation of our monitoring model. The easy integration of a Jess engine with a Java application enables the development of a full monitoring tool that aims at supporting a cooperative model of business contract enactment. We envisage applying this framework to more complex business scenarios.

ACKNOWLEDGMENTS

The first author is supported by FCT (Fundação para a Ciência e a Tecnologia) under grant SFRH/BD/29773/2006.

REFERENCES

UNCITRAL (1980). United Nations Convention on Contracts for the International Sale of Goods (CISG).
{"Source-Url": "https://repositorio-aberto.up.pt/bitstream/10216/15196/2/58314.pdf", "len_cl100k_base": 4987, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 22302, "total-output-tokens": 5704, "length": "2e12", "weborganizer": {"__label__adult": 0.0008268356323242188, "__label__art_design": 0.000934123992919922, "__label__crime_law": 0.00574493408203125, "__label__education_jobs": 0.005828857421875, "__label__entertainment": 0.00018978118896484375, "__label__fashion_beauty": 0.0003995895385742187, "__label__finance_business": 0.06903076171875, "__label__food_dining": 0.0009179115295410156, "__label__games": 0.00142669677734375, "__label__hardware": 0.001651763916015625, "__label__health": 0.0014162063598632812, "__label__history": 0.0007200241088867188, "__label__home_hobbies": 0.00035572052001953125, "__label__industrial": 0.0030155181884765625, "__label__literature": 0.0011110305786132812, "__label__politics": 0.0018358230590820312, "__label__religion": 0.0005650520324707031, "__label__science_tech": 0.28125, "__label__social_life": 0.00034046173095703125, "__label__software": 0.051422119140625, "__label__software_dev": 0.56787109375, "__label__sports_fitness": 0.0003998279571533203, "__label__transportation": 0.0021495819091796875, "__label__travel": 0.0004734992980957031}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23116, 0.0102]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23116, 0.26506]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23116, 0.91841]], "google_gemma-3-12b-it_contains_pii": [[0, 3789, false], [3789, 6329, null], [6329, 11078, null], [11078, 15465, null], [15465, 19246, null], [19246, 23116, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3789, true], [3789, 6329, null], [6329, 11078, null], [11078, 15465, null], [15465, 19246, null], [19246, 23116, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 23116, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 23116, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23116, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23116, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 23116, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23116, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23116, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23116, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23116, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23116, null]], "pdf_page_numbers": [[0, 3789, 1], [3789, 6329, 2], [6329, 11078, 3], [11078, 15465, 4], [15465, 19246, 5], [19246, 23116, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23116, 0.0]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
6d8fc40db47a7a047412f3db975251848fe63708
Scientists and software engineers: A tale of two cultures Conference Item How to cite: For guidance on citations see FAQs © [not recorded] Version: [not recorded] Link(s) to article on publisher’s website: http://www.cs.st-andrews.ac.uk/jr/ppig08/index.html Copyright and Moral Rights for the articles on this site are retained by the individual authors and/or other copyright owners. For more information on Open Research Online’s data policy on reuse of materials please consult the policies page. Scientists and software engineers: a tale of two cultures Judith Segal Department of Computing The Open University j.a.segal@open.ac.uk Keywords: POP-I C. Ill-defined problems, POP-II A end-users, POP –II C working practices, POP-V B, case studies Abstract The two cultures of the title are those observed in my field studies: the culture of scientists (financial mathematicians, earth and planetary scientists, and molecular biologists) developing their own software, and the culture of software engineers developing scientific software. In this paper, I shall describe some problems arising when scientists and software engineers come together to develop scientific software and discuss how these problems may be ascribed to their two different cultures. 1. Introduction One major difference between most commercial software development and scientific software development lies in the complexity of the domain. Most software engineers have some intuition as to what is required from (for example) a hotel booking system or a banking system or a payroll system; few have any intuition as to what is required from (for example) a stochastic modelling system or a quantum chemistry system or a protein crystallography system. The implication of this is that the relevant scientists must be deeply engaged in the system development, either developing it themselves in their role of ‘professional end-user developers’ (see below), or in providing (and explaining) requirements, giving feedback and performing user-acceptance tests. Before I go any further, I shall define my use of the terms ‘professional end-user developers’ and ‘culture’. The term ‘professional end-user developers’ (Segal, 2007) refers to people such as scientists and engineers working in highly technical, knowledge-rich domains who develop software in order to further their own professional goals and/or those of their close colleagues. Like other end-user developers, these people do not regard themselves primarily as software developers and have little or no education or training in software engineering. Unlike most other end-user developers, however, coding per se presents them with few problems as they are used to formal languages. Turning to the term ‘culture’, the concept of culture has many aspects. In this paper, I take the term to mean the set of values and customary behaviours of an identifiable group of people, professional end-user developers and software engineers in this case. This paper draws on field studies I have undertaken with financial mathematicians, earth and planetary scientists, space scientists and molecular biologists. In section 2, I shall describe the culture within which I have observed scientists developing software for their own use and/or for the use of their close colleagues, and present a model of how this software is developed, a professional end-user development model. 
In section 3, I shall describe two sets of problems which I observed when software engineers worked closely with scientists in order to develop scientific software, and which arise from a cultural mismatch. In the first case, software engineers tried to impose a traditional software engineering culture on scientists. In the second, scientists expected software engineers to ascribe to the culture of professional end-user development. In section 4, I shall discuss the limitations of my field studies. Although my field studies have explored quite a variety of contexts, they are in no way comprehensive. I discuss whether other software development models would fit better with scientific software development than the traditional phased waterfall model that I observed, and also whether the characteristics of the culture described in section 2 are common across all contexts of scientists developing scientific software. Section 5 consists of a summary and conclusions. 2. The Culture of Professional End-User Development 2.1. Values As described in Segal, 2007, the most salient characteristic of the culture I saw in my field studies of financial mathematicians and earth and planetary scientists, was the low value ascribed to software development knowledge and skill compared with knowledge of the mathematical/scientific domain. People spoke in terms of ‘everybody’ knowing how to develop software; of software development knowledge being merely part of the armoury of the average scientist; of the belief that a piece of software was something that could be dashed off during a lunch hour. In these two contexts of financial mathematics and earth and planetary science, software development is something one practises at the beginning of one’s professional career. As one ascends the career ladder (by passing one’s professional exams or by publishing enough scientific papers), then one leaves software development behind to be done by one’s juniors. The situation in which this is not the case, that is, in which a professional end-user developer – or, indeed, a software engineer working within a professional end-user organisation – develops software full-time on a long-term basis, does not appear to differ significantly in the low value afforded by the organisation to software development knowledge and skill. My current field study of molecular biologists includes several interviews with a professional end-user developer whose skill in software development had been recognised during his PhD work in molecular biology and who is now working full time developing and maintaining software for his lab. Although this software is the absolute sine qua non of the lab – without the software, there would be no lab – this man feels that there is no way someone in his position could ever become head of such a lab. His belief is that such a position would always go to a traditional bench biologist, despite the fact that traditional bench biology now plays a relatively small part in the work of the lab. I also talked with a software engineer who works for a central government research facility with the express aim of providing software support for the UK scientific community. The management of this facility consists of professional end-user developers, people who primarily consider themselves to be scientists. The developer constantly finds his promotion blocked because he has not published enough scientific papers – this despite the fact that software development, and not writing scientific papers, is his remit. 
The developer feels that the facility’s management do not understand, and cannot judge, professional software development (as opposed to professional end-user development). He feels that the concerns and quality goals of the former are quite different from those of the latter. This is a point to which I shall return briefly in section 3.2 below. The low value ascribed to software development knowledge and skills no doubt contributes to the difficulties that professional end-user developers have in acquiring such knowledge and skills as described in Segal, 2007, despite the fact that it is assumed that ‘everybody’ knows what to do, as above. Professional end-user developers have rarely had any formal software engineering education at university. However, the same is true of many software engineers, and in fact, Kelly, 2007, notes that university software engineering courses are frequently unpopular with potential professional end-user developers since they are often taught in a way which is independent of the domain and the students are unable to make links between the software engineering as taught and their chosen science. What is more important than formal education, I think, are the learning opportunities afforded by the community of practice. My interviews indicate that software engineers acquire their knowledge and skills through a variety of means, all dependent on their being part of a community (or network) of practice of software developers. These means include working with a variety of other developers on a variety of projects and thus sharing knowledge on an informal basis, reading books and studying internet tutorials etcetera as recommended by colleagues, and going to technical conferences and short courses, the existence of which is made known through the network of practice. For the professional end-user developer observed in my field studies, this community of software development practitioners does not exist. The primary community of practice to which the professional end-user developer belongs is that of the application domain, the science. Professional end-user developers often work on their own or in very small groups and so rarely have the opportunity to share knowledge informally. In at least one of my field studies, the perception that software development knowledge is trivial and known to everyone, meant that the management was loath to spend money on resources, such as courses, designed to improve such knowledge. 2.2. Behaviours: a model of professional end-user development Figure 1 is, I suggest, the standard model of professional end-user development. I found it practised by all the professional end-user developers in my field studies. And a casual conversation on a train with a computational linguist elicited the information that he recognised it as the model he used in writing his latest substantial program in Python in order to analyse the dialogues of Plato. In this model, the developer begins with just a vague idea of what is needed. S/he quickly develops a piece of software, and then sits back and reflects on the question of whether the software does what s/he wants and how it might be extended or modified, drawing in his/her colleagues if available. The developer goes round the development/evaluation loop several times until s/he decides s/he has got a suitable release. S/he then does testing of a very cursory nature. 
For example, a few items of data similar to the data that will be input when the software is released are entered into the system, and the output is checked to see that it looks broadly correct – or at least not broadly incorrect. The software is then ready to become accepted as a tool for the scientific endeavour.

[Figure 1. A model of professional end-user development (from Segal and Morris, 2008)]

The salient characteristics of this model are, firstly, the lack of an upfront requirements model; secondly, the intertwining of evaluation and the identification of emergent requirements (‘Is this what I/we want?’), and finally, the cursory nature of the final testing. This model would not be taught in any software engineering course – and yet, to judge by its pervasiveness, it works. But only in a very particular context, as I shall now discuss. Starting off with a vague idea of what is needed depends on the developer having sufficient knowledge of the domain. The reliance on feedback depends on the developer being embedded in the user community. Many of us will have experienced problems in getting potential users to engage in a software development in order to give informed and reasoned feedback. Getting such feedback is much easier if you, as the developer, are just asking your mate at the next desk/bench ‘Have a look at this. What do you think?’ I have several suggestions as to why testing is so cursory. The first is to do with the low value placed on software as opposed to that placed on the science: the software is valued only insofar as it progresses the science. I suggest that scientists regard the software in the same light as any other instrument for enabling their scientific endeavours. It is argued by many philosophers and historians of science, see for example, Chalmers, 1982, that scientists assume that their instruments work unless confronted by absolutely incontrovertible evidence. Perhaps this assumption also holds for their software: the innate quality of the software is not questioned unless it becomes clear that the software is not supporting the science. The second is to do with the developer being embedded in the user community. If a scientist does find faults in a piece of professional end-user developed software, then the developer is readily at hand (either the scientist him/herself or a close colleague) to make amendments. The third suggestion is to do with the nature of scientific software and concerns the great difficulty of validating software (such as scientific software) in which the domain is only poorly understood and, in fact, the aim of the software is to advance the understanding of the domain, see, for example, Carver et al. 2007. In this case, there is simply no way in which the scientist can know whether the output from the software is correct: s/he just has to rely on her/his gut instincts that the output is not absolutely wrong.

3. Clashing cultures: some problems that arise when scientists and software engineers work together

In this section, I shall describe, firstly, a situation in which software engineers tried to impose a traditional software engineering culture on scientists, and secondly, a situation in which a scientist assumed software engineers were working within a professional end-user development culture, as described above.

3.1. Why can’t scientists be more like software engineers?

The discussion in this section is based on the field study described in Segal, 2005.
The context of the field study is thus: the scientists were familiar with writing their own software in the lab to drive instruments such as spectrometers and to analyse the data coming from the instrument. They were now about to embark on a very risky endeavour: rather than pick up space material and bring it back to earth to be analysed in the lab, they were going to send an instrument up into space to do the analysis in situ and relay the results back to earth. They brought in software engineers to write a library of components which they could use to drive the instrument, and themselves had a model of the instrument in the lab which they could use to reify their requirements. The software engineers followed a waterfall-type phased model of software development as recommended by the European Space Agency. The scientists in their lab followed the model of professional end-user development as described in section 2.2 above. The first problem lay with requirements and is illustrated in Figure 2. The software engineers needed an upfront requirements document; the scientists expected most of the requirements to emerge. Other problems stemmed from the scientists being used to working within the lab, where informal face-to-face communication flourished. They were thus not used to either writing or reading formal project documents, such as requirement documents, and were thus not aware of the contents of such documents, and hence did not fully know which requirements had – or had not - been implemented. Their user acceptance testing was as cursory as that described in section 2.2. 3.2. Why can’t software engineers be more like scientists? In this section, I describe, in somewhat simplified terms, an aspect of a hitherto unpublished field study in which molecular biologists were employing software engineers to write some community software. The molecular biologists had all been at one time professional end-user developers, and some were still developing their own software. However, the community for which the software is intended is somewhat diverse and the software itself is considerably bigger than any that a professional end-user developer would tackle, and hence it was felt necessary for the scientists to employ software engineers. The first problem again lies with requirements. The molecular biologist heading the project said as he handed over a list of features to the project manager of the software engineers: ‘We know exactly what the requirements are and here is a list of them.’ Of course, the features were at too high a level for the software engineers to begin to implement. Figure 3 illustrates a hypothetical (but very realistic) instance. The scientist’s injunction to write a piece of software with a particular piece of scientific functionality is perfectly reasonable provided that the developer is a professional end-user developer. In this case, the developer knows the domain, has some intuition as to how a simple graph-matching program might work and might be used; can develop a first prototype and ask his/her peers for feedback, and generally follow the professional end-user development model. The poor software engineer, however, with no—or at best, weak—understanding of the domain, has great difficulty in proceeding. Figure 3 illustrates another clash between the expectations of professional end-user developers and software engineers. 
This is to do with the time that software development takes, which in turn depends on the different values and behaviours espoused by professional end-user developers and software engineers. In general, professional end-user development takes less time. The software project manager in this field study told me that, as a rough rule of thumb, his team took three times longer to produce a piece of software than the scientists expected. There are several potential reasons for this. The first concerns requirements. The establishment of requirements in professional end-user development, as illustrated in Figure 1, is absolutely integrated with the software development. In addition, the context in which professional end-user development flourishes, as described in section 2, is one in which the developer is a faithful representative of the user group, which implies that the user group is homogenous and not split into subgroups with diverse goals and behaviours. For the software engineer developing software for a diverse community (as in this field study), establishing requirements is a time consuming and difficult task. Potential users have to be persuaded to tear themselves away from their current endeavours and engage with the development of a system which they may well never use in its mature state (such potential users are often on short term research contracts). The software engineers have to ensure that the diversity of the user community is properly represented; that clashes between different branches of the community are resolved, and so on. The second concerns those issues which reflect the values of software engineering as opposed to those of professional end-user development. Foremost among these is testing. In 2.2, I discussed the fact that the cursory testing which is a feature of professional end-user development may be due to the fact that the emphasis is on the science which the software is intended to support and not on the software per se. Software engineers, on the other hand, should ideally identify the quality goals for any piece of software, and allocate testing time in accordance with these goals. For example, a quality goal might be robustness, in which case much time must be spent testing that the software does not fall over given a variety of inputs. Other issues which do not usually impact on professional end-user development include portability and maintainability. There might also be security issues when a diverse user community is involved, for example, issues of data access when users from different branches of the user community use the same system. 4. The limitations of my field studies I have undertaken a variety of field studies (Segal, 2007) in quite a variety of settings. The application domains have been in financial mathematics, earth and planetary scientists, and molecular biologists; the scientists have developed their software either in partnership with software engineers or on their own; the software developed has included software to drive instruments, model financial markets, and to store, analyse and support the interpretation of data. Across this variety, I have found a number of commonalities, such as the low value ascribed to software development knowledge and skill compared with domain knowledge and skills, and the ubiquity of the professional end-user development model. My field studies are in no way comprehensive however. 
For example, the software engineers in my studies never adopted agile methodologies, which, relying as they do on iterative feedback loops and face-to-face communication (see http://agilemanifesto.org/), might appear to offer more to scientific software development than the more traditional, phased, waterfall-type methods. There are experience reports in the literature of software engineers successfully engaging scientists in agile development, see, for example, Bache, 2003, and Kane, 2003. However, I am not aware of any objective field study data in this area, and, given my recent experience of co-editing a special issue of IEEE Software on developing scientific software, I am wondering whether, when scientists refer to themselves as following an agile methodology, they are not just following the iterative feedback model of Figure 1. In addition, my field studies did not cover high performance computing systems (HPCS), that is, systems in which many processors act in parallel. Such systems are commonly used in science to simulate natural phenomena which are too big or too small or too dangerous or too complex to be investigated in vivo. There has been a lot of interest in researching HPCS in the USA recently, spurred by a large, multi-phased, ongoing DARPA project (see www.highproductivity.org). This project was instigated in response to a concern that scientific productivity using HPCS systems did not seem to improve commensurate with the rate at which the capabilities of the hardware improved. The aim of the project is thus to improve scientific productivity by a factor of ten, by dint of improvements in both software and hardware. The exact nature of the concept of ‘scientific productivity’ appears not to have been completely explicated, however. The DARPA project has generated many field studies of scientists being deeply involved in the development of software simulations, see, for example, Carver et al., 2007, Basili et al., 2008. The contexts in which HPCS are used by scientists vary greatly, and Basili et al., 2008, allow that their field studies are not comprehensive – and also acknowledge that even within their field studies, they found a great deal of variation. However, their field studies demonstrate similarities with mine. For example, they found that the science, rather than the software, was paramount, and they found the same reliance on emergent requirements and difficulties with testing as did I. However, some of their findings were different from mine. For example, the relatively low status of software development that I found universal, was not always found in their case studies. I was told that sometimes physicists who developed HPCS thought of themselves as forming an elite among physicists, and, moreover, their opinion of themselves was based not on what they could bring to physics but rather on their adeptness in employing programmers’ tricks to support parallel processing. This is totally counter to my findings. I was given a possible explanation for this phenomenon, which is that these physicists regard physics as having essentially three branches of equal worth: theoretical, experimental, and in silico (that is, software simulations). This does not appear to be the case in my field studies where (except in the case of the financial mathematicians) software is seen as a supporting tool for scientific enquiry rather than as providing a model of science which can be queried directly. 
In the case of the financial mathematicians in my field studies who were developing models, software development tends to be undertaken by students (in the professional sense, that is, people who had not yet passed a long series of professional exams), and this may account for the lack of value afforded to it. Given the importance of simulations in science, it is clear that HPCS represent a domain of scientific software development into which I am going to have to look more closely. 5. Summary and conclusions My field studies have identified two characteristics of a culture of scientific professional end-user development: the low value given to software development knowledge and skill compared to domain knowledge, and a model of professional end-user development. Judged by its pervasiveness, this latter is very successful, though only in a particular context. I have identified the characteristics of this context as being the following: - The developer is embedded in the user community. - The user community is cohesive. - The requirements are not fully established at the outset. - The value of the software lies in the extent to which it progresses science. I have reported the clashes which occurred when software engineers tried to impose their culture of traditional software development onto scientists and vice versa. My field studies, of necessity, illustrate only some of the variety of scientific software development. Other field studies have investigated the situation in which high performance computing systems are developed for simulation purposes. These studies have confirmed my findings of the primacy of science over software, the importance of emergent requirements and the difficulties of testing. This research is important because I take it as a given that software engineers cannot hope to provide effective tools, technologies and methods for improving scientific software development without first... understanding the cultural context, the values and customary behaviours, in which this development takes place. As I describe in section 3, lack of understanding of this context can lead to major problems. There is still much work to be done in this area. A complete research agenda would, I argue, encompass the following: 1. The identification of the salient dimensions against which contexts of scientific computing might be characterised. Such dimensions might include the following specific to scientific computing: whether the scientists are developing the software on their own, and if not, the degree to which software engineers are involved, and the value ascribed to software development in the user community. Other dimensions might include: whether the user community is homogenous or diverse, the size of the development team, the longevity of the code, etcetera. 2. The identification of those established techniques in software engineering which might assist scientific software developers. 3. The establishment of a mapping between software techniques identified in 2 and contexts characterised along the dimensions identified in 1. 4. The identification of the means by which scientists might be made aware of those software engineering techniques and tools which might be relevant to their development. This latter point is especially significant given the difficulties of sharing software development knowledge among professional end-user developers, as discussed in section 2.1 above. 
This research agenda might appear daunting but I hope that this paper and others like it might contribute a significant first step.

6. Acknowledgements

I should like to acknowledge my deep gratitude to all those software engineers and scientists who took part in my field studies. They were invariably patient and reflective, and themselves contributed great insights.

7. References

Chalmers, A.F., 1982. *What is this thing called science?* Open University Press, Milton Keynes, UK.
{"Source-Url": "http://oro.open.ac.uk/17671/1/PPIG_08Segal.pdf", "len_cl100k_base": 5342, "olmocr-version": "0.1.50", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 19920, "total-output-tokens": 6207, "length": "2e12", "weborganizer": {"__label__adult": 0.0004355907440185547, "__label__art_design": 0.0003561973571777344, "__label__crime_law": 0.0003790855407714844, "__label__education_jobs": 0.006870269775390625, "__label__entertainment": 0.00010269880294799803, "__label__fashion_beauty": 0.00018465518951416016, "__label__finance_business": 0.0004429817199707031, "__label__food_dining": 0.0004453659057617187, "__label__games": 0.0006208419799804688, "__label__hardware": 0.0008358955383300781, "__label__health": 0.0007238388061523438, "__label__history": 0.0003364086151123047, "__label__home_hobbies": 0.00012314319610595703, "__label__industrial": 0.000370025634765625, "__label__literature": 0.0007143020629882812, "__label__politics": 0.0003483295440673828, "__label__religion": 0.0005769729614257812, "__label__science_tech": 0.037353515625, "__label__social_life": 0.000308990478515625, "__label__software": 0.006317138671875, "__label__software_dev": 0.94091796875, "__label__sports_fitness": 0.0003273487091064453, "__label__transportation": 0.0006327629089355469, "__label__travel": 0.00022077560424804688}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29618, 0.01343]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29618, 0.79702]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29618, 0.95989]], "google_gemma-3-12b-it_contains_pii": [[0, 783, false], [783, 4453, null], [4453, 9167, null], [9167, 11828, null], [11828, 15749, null], [15749, 17246, null], [17246, 22027, null], [22027, 26410, null], [26410, 29618, null]], "google_gemma-3-12b-it_is_public_document": [[0, 783, true], [783, 4453, null], [4453, 9167, null], [9167, 11828, null], [11828, 15749, null], [15749, 17246, null], [17246, 22027, null], [22027, 26410, null], [26410, 29618, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29618, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29618, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29618, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29618, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29618, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29618, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29618, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29618, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29618, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29618, null]], "pdf_page_numbers": [[0, 783, 1], [783, 4453, 2], [4453, 9167, 3], [9167, 11828, 4], [11828, 15749, 5], [15749, 17246, 6], [17246, 22027, 7], [22027, 26410, 8], [26410, 29618, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29618, 0.0]]}
olmocr_science_pdfs
2024-12-03
2024-12-03
c5d67fc17d3e9987abd3c16da89f90ce3a5711ed
A posteriori taint-tracking for demonstrating non-interference in expressive low-level languages

Peter Aldous, University of Utah, peteya@cs.utah.edu
Matthew Might, University of Utah, might@cs.utah.edu

February 3, 2016

Abstract

We previously presented a theory of analysis that is capable of proving non-interference for expressive low-level languages. We now present an implementation of that analysis, an empirical evaluation of that implementation, and optimizations to improve its performance. In the course of developing an optimization, we provide an independence result for the taint-flow analysis that drives tracking of information. In particular, we show that the taint-tracking can be derived from the results of a taint-free analysis. In addition to improving performance, this independence broadens the applicability of the underlying approach to information-flow analysis. Finally, we present metrics for our analysis on a suite of applications which demonstrate improvements in its performance.

1 Introduction

In our SAS 2015 paper [1], we presented a theory of analysis suitable for proving non-interference (or the absence of information flows) in expressive low-level languages, such as Dalvik bytecode [11]. Dalvik bytecode, like many modern low-level languages, contains objects, virtual methods, exceptional flow, conditional jumps, and mutation. A proof of non-interference could be used to demonstrate that user data, such as passwords or GPS location, are not transmitted to third parties. It could also be used to verify that a cryptographic primitive does not leak its key or to verify that sandboxed applications cannot communicate with each other. This analysis is distinct from analyses that focus on identification of bugs; it is successful not when it helps analysts to identify and fix problems but when it helps analysts to prove the absence of problems. In order to help analysts to make guarantees about programs, this automated analysis eliminates false negatives (and, consequently, permits false positives). Our theory of analysis, as presented, had no implementation or performance metrics. We left the question of tractability open. In order to address this question, we present an implementation (named Data) of our analysis. We discuss the details of its implementation and test it on Android applications. We then discuss optimizations to the analysis that make it tractable on an increasing number of these applications.

1.1 Contributions

- We present an implementation of an information-flow analysis for Android applications.
- We discuss optimizations made to the analysis and present performance metrics from executions of the analysis on several Android applications.
- We demonstrate that the analysis generates false positives but that it is sufficiently accurate to help analysts to find information flows and prove their absence.

1.2 Structure

Section 2 describes the techniques upon which our analysis is built as they exist in the literature. Section 3 elaborates on the implementation of Data, including its optimizations to the analysis as presented. Section 4 measures the performance of the different variants of the analysis. Section 5 elaborates on the significance of the results. Section 6 discusses related work.

2 Background

Our theory of analysis uses a small-step abstract interpreter with components added to prove non-interference. Subsection 2.1 describes small-step abstract interpretation and subsection 2.2 explains non-interference.
2.1 Small-step abstract interpretation

The CESK [9] evaluation model represents states in an interpreter’s execution as tuples of control (C), environment (E), store (S), and continuations (K). The control represents where the interpreter is in the program, the environment maps variables to addresses, the store maps addresses to values, and a continuation contains the information used to return from a function. In an imperative program, these terms roughly approximate (respectively) the program counter, the frame pointer, the heap, and the stack. Van Horn and Might [25] demonstrated that CESK interpreters can be turned into small-step abstract interpreters by abstracting their state spaces so that they can be guaranteed to be finite and by modifying their transition rules to permit multiple successors to each state. An abstract CESK interpreter produces not a linear program trace but a graph of abstract states that models all possible executions from a given initial abstract state. A finite state space is not sufficient to guarantee termination. It is also necessary to avoid calculating successors for the same state repeatedly. Since the successors to a state are a function of the members of the state (and, in some cases, immutable values such as the program being interpreted), the successors of a state will always be the same and it is never necessary to repeat a calculation. As such, small-step abstract interpreters can be guaranteed to terminate. Instead of interpreting until there are no more successors, interpretation proceeds, ignoring already-explored states, until no new states can be found. The analysis terminates with a finite graph of all reachable abstract states. A sound small-step abstract interpretation represents all possible program executions in its finite state space. Typically, soundness is proven by proving simulation. Simulation proofs show that abstract interpretation simulates concrete interpretation by showing that the relationship between a concrete state $\varsigma$ and its abstraction $\hat{\varsigma}$ holds for their respective successors. With a concretization function $\gamma$, we can formalize the relationship between $\varsigma$ and $\hat{\varsigma}$: $\varsigma$ is in $\gamma(\hat{\varsigma})$. If $\varsigma'$ is the concrete successor to $\varsigma$ and $\hat{\varsigma}'$ is some abstract successor to $\hat{\varsigma}$, then $\varsigma'$ must be in $\gamma(\hat{\varsigma}')$. Given some initial concrete state $\varsigma_0$ and an initial abstract state $\hat{\varsigma}_0$ whose concretization includes $\varsigma_0$ and a proof of this inductive property, we can conclude that the abstract state graph includes all possible behaviors that the concrete trace could exhibit. More formally, our inductive property states that if

$$\varsigma \rightarrow \varsigma' \text{ and } \varsigma \in \gamma(\hat{\varsigma})$$

then there exists an abstract state $\hat{\varsigma}'$ such that

$$\hat{\varsigma} \leadsto \hat{\varsigma}' \text{ and } \varsigma' \in \gamma(\hat{\varsigma}')$$

### 2.2 Non-interference

Traditional taint tracking mechanisms apply a security type or label, also called a taint, to sensitive values, such as a phone’s location or a user’s password. Whenever a new value is written, it derives its security type from the values upon which it depends.
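As a rough illustration of that last sentence (added here; this is not code from the paper or from Data), an interpreter that tracks only explicit flows can join the labels of the operands whenever it writes a value:

```java
import java.util.HashMap;
import java.util.Map;

/** A two-point security lattice: LOW below HIGH. */
enum Label {
    LOW, HIGH;

    /** Least upper bound: the result is HIGH if either operand is HIGH. */
    static Label join(Label a, Label b) {
        return (a == HIGH || b == HIGH) ? HIGH : LOW;
    }
}

/** Minimal explicit taint tracking over named registers. */
final class TaintStore {
    private final Map<String, Label> labels = new HashMap<>();

    Label labelOf(String register) {
        return labels.getOrDefault(register, Label.LOW);
    }

    /** Writing dst := src1 op src2 derives dst's label from its operands. */
    void assignBinary(String dst, String src1, String src2) {
        labels.put(dst, Label.join(labelOf(src1), labelOf(src2)));
    }

    /** Mark a source of sensitive data, e.g. a password or GPS reading. */
    void markSensitive(String register) {
        labels.put(register, Label.HIGH);
    }
}
```

A scheme like this catches explicit flows only: in the implicit flow of Listing 1, discussed next, y is only ever assigned constants, so its label never picks up the label of x even though its value always reveals x.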
Although many security types are binary, where values with a high security label are sensitive and must be protected and values with a low security label are not sensitive, Denning demonstrated that security types may be rich lattices [5]. These techniques are effective for explicit information flows but fail to detect implicit information flows, which depend on control flow to leak information. Consider the Java code snippet in Listing 1, which demonstrates an implicit flow; despite the fact that $y$ is written with constants, it will always get the same value as $x$. Traditional taint tracking mechanisms track only explicit flows and fail to catch implicit flows. In order to track implicit information flows, taints can also be applied to the program's context, per Denning and Denning [6]. Denning and Denning claimed that a static analysis of postdominance in the control flow graph would allow context tainting to apply to languages with arbitrary goto statements but did not prove non-interference. Furthermore, their analysis does not include function calls or exceptional control flow. We demonstrated that such an analysis is possible [1], although it was necessary to modify the analysis to properly handle these rich language features. In order to demonstrate the absence of information flows, even in the presence of exceptional control flow and other rich language features, we proved non-interference, which is the property that sensitive information cannot affect (or interfere with) behaviors that are visible to an attacker. Since some programs do not satisfy the requirement of non-interference, the proof of non-interference is a proof that any interference will be identified by the small-step abstract interpreter. Our analysis uses the suggestion of Denning and Denning to calculate postdominance, but it uses a richer graph (called the execution point graph) than the control flow graph. Nodes in an execution point graph are pairs of a code point and a natural number, which is the depth of the stack. As we demonstrated, a subtle class of information leak does exist (in languages with functions) that could elude an analysis that uses a control flow graph to detect implicit flows. We performed abstract interpretation with taint-flow analysis and propagated taints from context. After abstract interpretation, we calculated the execution point graph by projecting the state graph. Then, we used the execution point graph to demonstrate that some statements occur at points in the program unaffected by certain branch statements. In these cases, the taints in question can be removed or ignored without allowing interference.

3 Implementation Our analysis extends the state space of an abstract CESK machine to include a taint store, which acts in parallel with the store but stores taints instead of values, and a context taint set, which tracks the points in the program where branches occurred that depended on tainted values. These modifications allowed us to prove non-interference but added considerable complexity to abstract interpretation. The addition of the taint store makes no difference in the asymptotic complexity of abstract interpretation (assuming the size of the set of taint values is constant), as the taint store is essentially a small extension to the store. There can be no more taint values than abstract values. The context taint set, on the other hand, can add considerable complexity.
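Listing 1 is not reproduced in this excerpt; a hypothetical snippet in the same spirit shows why a context taint is needed: no assignment ever copies x into y, yet y always ends up equal to x.

```java
final class ImplicitFlow {
    // Hypothetical snippet in the spirit of the implicit flow described for
    // Listing 1 (not the original listing). Only constants are ever assigned
    // to y, yet y always ends up equal to x.
    static boolean implicitCopy(boolean x) {
        boolean y;
        if (x) {
            y = true;    // no data dependence on x ...
        } else {
            y = false;   // ... but control flow reveals x through y
        }
        return y;
    }
}
```

A taint on x never reaches y through data dependencies alone; only by tainting the context of the branch can the analysis record that the write to y depends on x.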
Unlike the taint store, the context taint set is a fully fledged additional component to each state, which expands the size of the state space and can greatly expand the size of the state graph for a particular program. Worse still, widening the store has no effect on the context taint set and there is no obvious way to widen the context taint set separately. Subsection 3.1 details the salient features of the implementation. Subsection 3.2 addresses the expansion of the state graph size due to context taint by removing taint components from the state space and performing taint flow analysis after abstract interpretation.

3.1 Implementation details

3.1.1 Modularity A canonical abstract interpreter in the style of Abstracting Abstract Machines [25] has one allocation function. Since Data operates on Dalvik bytecode, it needs abstract allocators for frame pointers, continuation addresses, array addresses, and object addresses. Additionally, Dalvik bytecode has four primitive types: int, long, float, and double; each primitive type requires abstraction. In the spirit of the work of Jenkins et al. [13], using concrete allocators and concrete primitive types makes Data a concrete interpreter. Allocators for different components of the interpreter are all defined as abstract classes so that they can be exchanged. Continuation addresses can be allocated with a 0CFA allocator or a 1CFA allocator. Other allocators can easily be added to the source code. Primitive type abstractors are also abstract classes. Implementations exist for each primitive type that represent concrete values, sets of concrete values, and the set of all values. Additional abstractions can easily be written, although it is common for multiple abstraction types to co-exist and it may be necessary to augment the existing abstractors so they can perform arithmetic operations with the additional abstractors. Naturally, any combination of these allocators and abstractors can be chosen with command-line parameters.

3.1.2 Register de-allocation Although Dalvik bytecode uses virtual registers, the Android SDK allocates registers sparingly. Register allocation algorithms use a technique called liveness analysis to determine which registers must co-exist and which registers can be combined. Liveness analysis goes backwards through the program. When a variable is read, it is "live"; in other words, its value is needed at that point. When it is written, it is "dead"; portions of the program above its assignment cannot see the result of this assignment. Any two variables that are live at the same time are said to interfere (this is not to be confused with non-interference as used by information flow). Variables that interfere with each other cannot share the same register. We illustrate liveness analysis with a simple program snippet in Listing 2.

Listing 2: A sample program for register allocation
```java
x = 3;                   // {}
y = x;                   // {x}
z = 4;                   // {y}
x = 5;                   // {y, z}
System.out.println(y);   // {y, z}
System.out.println(z);   // {z}
```

The last statement reads \( z \), so our live set is \( \{ z \} \). The previous statement adds \( y \) to our live set: \( \{ y, z \} \). Writing to \( x \) does nothing, as \( x \) is not in the live set. The statement prior writes to \( z \), so \( z \) is removed from the set and \( y \) is left alone: \( \{ y \} \). The previous statement "kills" \( y \) by writing it and "generates" \( x \) by reading it, so the live set is \( \{ x \} \). Finally, the first statement kills \( x \) and the live set is left empty.
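The backward pass just traced can be written down directly; the sketch below is hypothetical (it assumes straight-line code summarized as read/write sets per statement, with no control flow) and is not the analyzer's actual code:

```java
import java.util.*;

// Hypothetical sketch of backward liveness over a straight-line block. Each
// statement is summarized by the variables it reads and the variable it
// writes (if any); live sets are computed from the last statement upward.
final class Liveness {
    record Stmt(Set<String> reads, String writes) {}

    // Returns, for each statement, the set of variables live just before it.
    static List<Set<String>> liveBefore(List<Stmt> block) {
        List<Set<String>> result = new ArrayList<>(Collections.nCopies(block.size(), Set.of()));
        Set<String> live = new HashSet<>();
        for (int i = block.size() - 1; i >= 0; i--) {
            Stmt s = block.get(i);
            if (s.writes() != null) live.remove(s.writes()); // kill the written variable
            live.addAll(s.reads());                          // generate the variables read
            result.set(i, Set.copyOf(live));
        }
        return result;
    }
}
```

Encoding Listing 2 as such read/write summaries reproduces the live sets shown in its comments.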
Since \( x \) and \( z \) are never in the live set together, they can be combined into a single register during allocation. If they are combined, \( y \) must be allocated to a different register because the live set includes both \( y \) and \( z \) at the second-to-last line of code. Uses and definitions of variables can be combined into use-def chains. In the example in Listing 2, \( x \) would be put into two separate chains. The first includes its definition on the first line and the other includes its definition before the `println` statements. Separating distinct uses of variables enables further optimizations; the different use-def chains for \( x \) can be allocated independently. Our analyzer performs liveness analysis, but for the reverse purpose: it creates use-def chains of registers and uses these chains to create addresses for stack values in the abstract store. By separating registers into use-def chains, we prevent values from merging in the abstract store that would otherwise be conflated.

3.1.3 Ancillary components In order to discover entry points, it is necessary to make sense of Android's binary XML formats. Each Android application has a manifest file called `AndroidManifest.xml` that contains a list of its entry points. It also contains a resource table in a file called `resources.arsc`, which in turn identifies different layout files. Layout files contain descriptions of the various UI components, which in turn define additional entry points; the press of a button or the entry of text into a field may be necessary in order to complete some information flows. Each of these files comes in Android's binary format, which is created and read by copying C structs to and from memory. In order to ensure that all program behaviors are analyzed in the absence of a main function, Data uses **entry-point saturation**, which was first presented in prior work [17].

3.2 Optimization

3.2.1 A posteriori taint tracking The context taint set in our original analysis grows monotonically. We removed context taints in postprocessing, after calculating the entire state graph. In our new analysis, we move all of the information flow calculations to postprocessing. Since any additional steps taken during abstract interpretation do not affect state graph exploration, separating abstract interpretation and information flow analysis allows known modifications to small-step abstract interpretation, such as store widening, to be used to their full effect. In order to preserve generality, the abstract interpreter must keep track of the relationships between addresses read and written at each state. This bookkeeping is done outside of the state space and allows the abstract interpreter to use arbitrary allocators, as described by Might and Manolios [19], without making it impossible for the taint tracking mechanism to correctly identify where information flows based on the state graph alone. Since the entire state graph (and, therefore, the entire execution point graph) is available to the information flow analysis, it is not necessary to propagate context taints from their origins to the end of the program and remove or ignore taints later. Instead, the taint tracking mechanism can avail itself of the fully-formed execution point graph to calculate which assignments require taint from context. The state graph, together with the annotations about addresses read and written at each state, contains enough information to reconstruct all of the program's behaviors.
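The bookkeeping can be pictured as a per-state record of addresses read and written; the sketch below is a heavily simplified, hypothetical illustration of recovering value taints from the finished graph alone. It omits context taints, the execution point graph, and stack tracking entirely.

```java
import java.util.*;

// Hypothetical, heavily simplified sketch of post-hoc taint propagation over
// a finished state graph: each state is annotated with the addresses it read
// and wrote, and value taints are pushed along edges with a worklist until a
// fixed point is reached.
final class PosterioriTaints {
    static Map<String, Set<String>> propagate(
            Map<Integer, List<Integer>> successors,   // edges of the explored state graph
            Map<Integer, Set<String>> readsAt,        // addresses read at each state
            Map<Integer, Set<String>> writesAt,       // addresses written at each state
            Map<String, Set<String>> taints,          // address -> taints (mutated in place)
            int initialState) {
        Deque<Integer> worklist = new ArrayDeque<>(List.of(initialState));
        Set<Integer> visited = new HashSet<>(List.of(initialState));
        while (!worklist.isEmpty()) {
            int state = worklist.pop();
            // Taints on addresses read at this state flow to every address written here.
            Set<String> incoming = new HashSet<>();
            for (String addr : readsAt.getOrDefault(state, Set.of()))
                incoming.addAll(taints.getOrDefault(addr, Set.of()));
            boolean changed = false;
            for (String addr : writesAt.getOrDefault(state, Set.of()))
                changed |= taints.computeIfAbsent(addr, a -> new HashSet<>()).addAll(incoming);
            // Successors are (re)visited when they are new or when some taint set grew.
            for (int next : successors.getOrDefault(state, List.of()))
                if (visited.add(next) || changed) worklist.push(next);
        }
        return taints;
    }
}
```

Nothing in such a pass re-runs the interpreter; it only reads the graph and its annotations.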
As such, it is possible to construct the same information flows as in the original model. Propagation of taints uses a fixed point algorithm, much like graph exploration. At each state, taints at addresses read propagate to the addresses they influence. Similarly, taints are added to addresses from context taint whenever writes occur. Taint propagation operates in the same style as in our original analysis except that no spurious context taints are propagated. The propagation algorithm must also track the context taints seen at code points that invoke functions in order to faithfully recreate context taint when removing frames from the stack (as happens upon returning from a function or throwing an exception). Much like state exploration, where a state may have multiple successors, taint propagation can proceed to multiple successor states. Like small-step abstract interpretation, our taint tracking algorithm proceeds to a fixed point: when the propagation reaches a state it has seen previously, when its taint store is subsumed by the taints previously seen at that state, when the taints in the context have all been seen at the state's execution point, and when the context taints in the stack are subsumed by context taints previously seen on the stack, further propagation will generate no new information.

3.2.2 Non-interference in a posteriori taint tracking The proof of non-interference is essentially the same for the a posteriori taint tracking method as in our original theory of analysis. It hinges on the same notion of similarity and proceeds along the state graph as before. Traces in this proof are series of concrete states, as in the original formulation. Each trace follows some path through the (already explored) state graph. As in the original proof, taint propagation may traverse fewer states than are members of its corresponding concrete trace because the fixed point algorithm may prove that no additional information can be found. As before, this does not invalidate soundness or non-interference.

4 Experimental results

4.1 Methodology The Automated Program Analysis for Cybersecurity (APAC) program performed several experiments in which analysts were presented with Android applications and were tasked with identifying malware that had been implanted in the applications. We tested both the implementation of the analysis from Aldous and Might and the implementation of a posteriori taint tracking on each application from one of these experiments. Leaks that depend on the inner workings of library methods were considered out of scope. In order to measure accuracy, we paired the occurrences of sources and of sinks in the apps. Since these pairings number in the millions, it was impossible to verify each one by hand. Instead, we counted the pairings in Filterize, the application with the smallest number of sources and sinks used (16 sources and 19 sinks for a total of 304 pairings, with four of those flows relying on library methods and therefore being out of bounds). Of the twelve applications chosen, six timed out for all analyses.

4.2 Results Table 1 shows the time each analysis took, the number of states discovered by each analysis, and the number of lines of code covered by abstract interpretation. Applications timed out after 24 hours. Blank cells indicate that the analysis timed out on the application in question. Table 2 details the accuracy of the analysis on Filterize.
The data show that the separation of abstract interpretation from taint tracking improves the tractability of the analysis, allowing it to successfully analyze six applications instead of four, and to analyze the programs whose analyses were tractable more quickly. As its theoretical basis suggests, there are no false negatives in the analysis. Wherever a source can reach any sink, the analysis reports that it can reach all of the sinks that can be reached. Overtainting is unsurprising, since the abstract ### Table 1: Performance statistics for analyses on APAC apps <table> <thead> <tr> <th>App name</th> <th>Runtime in state</th> <th>States found in state</th> <th>LOCs covered in state</th> <th>LOCs covered a posteriori</th> </tr> </thead> <tbody> <tr> <td>BattleStat</td> <td>233 s</td> <td>221</td> <td>188</td> <td>188</td> </tr> <tr> <td>chatterbocs</td> <td>3034 s</td> <td>1067</td> <td>447</td> <td>447</td> </tr> <tr> <td>Filterize</td> <td>–</td> <td>80238 s</td> <td>6844</td> <td>1375</td> </tr> <tr> <td>Filterize (1CFA)</td> <td>–</td> <td>27917 s</td> <td>3944</td> <td>1255</td> </tr> <tr> <td>ICD9</td> <td>2086 s</td> <td>593</td> <td>412</td> <td>415</td> </tr> <tr> <td>rLurker</td> <td>1917 s</td> <td>706</td> <td>572</td> <td>572</td> </tr> <tr> <td>Valet</td> <td>–</td> <td>43822 s</td> <td>2382</td> <td>1254</td> </tr> </tbody> </table> ### Table 2: Accuracy of analysis of Filterize <table> <thead> <tr> <th>Analysis</th> <th>True positives</th> <th>True negatives</th> <th>False positives</th> <th>False negatives</th> </tr> </thead> <tbody> <tr> <td>0CFA</td> <td>4</td> <td>190</td> <td>106</td> <td>0</td> </tr> <tr> <td>1CFA</td> <td>4</td> <td>224</td> <td>72</td> <td>0</td> </tr> </tbody> </table> interpretation performed was 0CFA; 0CFA merges control flow upon returning from a function. Even 0CFA, with its lack of precision, succeeds in demonstrating that there are no information flows in 190 (62.5%) of the 304 source/sink pairs in Filterize. This alleviates more than half of the workload for a human analyst charged with proving the absence of information flows. Since merging seemed to be a significant factor in the precision of the analysis on Filterize, we ran 1CFA on Filterize. 1CFA was more precise, so it covered many fewer lines of code and discovered many fewer states. As a result, it was significantly faster in a posteriori taint tracking (it timed out in in-state taint tracking with both allocators). Its tracking of information flows was also more precise; 1CFA demonstrated that there were no information flows in 224 (73.7%) of the source/sink pairs in Filterize. ### 5 Discussion #### 5.1 Assisted assurance Unlike many analyses, Data is not designed to identify bugs or vulnerabilities. Instead, it is designed to help experts to prove their absence. As such, its utility is not usefully measured by comparisons against tools that identify bugs. It is useful, rather, when it decreases the amount of work that must be done to verify the absence of information flows. 5.2 Analysis-agnostic non-interference The proof of non-interference requires a sound state graph and, since allocators may be non-deterministic, information about the addresses written and read at each state (or, for a less precise analysis, at each code point in the program or even at any point in the program). As mentioned previously, this separation allows store widening to be effective. 
However, the separation of these two analyses is powerful enough to permit much more than widening, as the proof of non-interference is agnostic to the style of analysis performed. As a result, a more precise analysis such as PDCFA [7] could be performed on a program and the non-interference proof would still apply. The same is true of optimizations, such as abstract garbage collection [21]. Our proof of non-interference, like that in our original theory of analysis, relies on induction on the structure of Dalvik bytecode. It relies particularly on the fact that Dalvik's instructions allow only four stack operations: no change, a single push, a single pop, and an arbitrary number of pops (for thrown exceptions). However, it may be possible to generalize the proof to any language as long as the language uses only combinations of pushes and pops in its semantics. Even analyses of languages with call/cc could prove non-interference with this technique. Additionally, if the bookkeeping during analysis were expanded to include stack behaviors, including each execution point visited when searching for an exception handler, it might be possible to perform taint tracking on any sound state graph for any analysis on any language and get a proof of non-interference without additional theoretical work.

5.3 Generalized state graph postprocessing We originally created an analysis by extending the state space of a CESK machine with data that do not affect the execution of the program. Subsequently, we improved its performance drastically by removing the additional information from the state space and calculating it from a finished state graph. When designing future analyses based on small-step abstract interpreters, it is likely that it will be similarly practical to perform the analysis after abstract interpretation rather than extending the state space. For example, it is likely that this same technique could be applied to abstract counting [20].

6 Related work Our analysis is an implementation of the analysis presented by Aldous and Might [1]. It builds upon prior work in small-step abstract interpretation, such as the seminal work by Van Horn and Might [25], and uses entry-point saturation [17]. Sabelfeld and Myers [24] present a succinct summary of the concepts in information flow tracking. Denning [5] and, later, Denning and Denning [6] pioneered work in taint tracking. Subsequent papers applied and expanded on their work, such as the work by Volpano, Irvine, and Smith [27] that established that the analysis presented by Denning and Denning is sound. Volpano and Smith [28] extend the work of Denning and Denning so that it handles some termination leaks and some exceptional flow. Both of these papers use languages without conditional jumps, making it impossible to apply them to Dalvik bytecode or similar languages without significant adaptation. Many analyses are suitable for finding information flows but not for proving their absence. Kim et al. [16] analyze Dalvik bytecode but do not address implicit flows. Chang, Streiff, and Lin present a tool that transforms C programs by enforcing policies, but they too address only explicit flows. Arzt et al. [2] present FlowDroid, an analyzer for Android applications. Their paper states that they do not address implicit flows, although a blog post claims that they have since added support for them. Xu, Bhaktar, and Sekar [29] transform C programs in a way that tracks information flows.
They track some implicits but not all of them. Accordingly, they do not claim or prove non-interference. Kang, et al. [15] perform an analysis on Windows x86 binaries but specifically allow for false negatives. Liang and Might [18] analyze Python programs and detect explicit information flows. Venkatakrishnan, et al. [26] prove non-interference with a dynamic analysis that can terminate a program before it leaks information. Their analysis targets a language similar to Jif; it is a simple imperative language that uses security annotations and that lacks exceptional control flow. TaintDroid [8] is a dynamic extension to Android that tracks explicit flows. Significantly, TaintDroid tracks information in memory and in storage. It does not prevent information leaks but may detect them. Jia, et al. [14] present a dynamic analysis that allows programmers to provide annotations that are enforced dynamically. Myers [23] created JFlow, which extends Java and allows programmers to provide annotations which it enforces. JFlow is a hybrid of static and dynamic analyses and relies on branches with syntactic bounds. It permits several species of covert flows. There are analyses that guarantee non-interference. In every case, the analysis targets a language without features common to expressive low-level languages and, as such, cannot guarantee non-interference on programs written in these languages. Giacobazzi and Mastroeni [10] prove non-interference in IMP, which lacks exceptional control flow, including the ability to break from a loop. Askarov, et al. [3] and Moore, Askarov, and Chong [22] prove non-interference in Jif, a Java-like language with security annotations that lacks exceptional control flow. Barthe and Rezk [4] prove non-interference in a language modeled after the JVM that lacks function calls. As such, exceptions are reduced to jumps and their analysis cannot be applied to languages with functions and exceptional flow. The specification for Dalvik bytecode [11] and for the dex file format [12] provide details of the languages and their semantics. 7 Conclusion Our original theory of analysis is theoretically sound. Our implementation shows that, with some optimizations, it is effective at diminishing the work that an analyst must do to prove the absence of information flows in moderately sized Android applications. Our analysis also demonstrates two possible theoretical results: it may be possible to completely separate the proof of non-interference from the language being analyzed and from the analysis performed, allowing any analysis to add trivial bookkeeping and a postprocessing step in order to produce a proof of non-interference. It may also be possible to perform other refinements, such as abstract counting, to small-step abstract interpretation a posteriori, thus yielding the same optimizations while shrinking the state space. Additionally, we have shown that any sound state graph, combined with information about addresses written and read during the course of execution, can be used to prove non-interference without the need for additional theoretical work. Other analyses may also be performed a posteriori. References
{"Source-Url": "http://spw16.langsec.org/papers/aldous-aposteriory-taint-tracking.pdf", "len_cl100k_base": 6499, "olmocr-version": "0.1.49", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 33316, "total-output-tokens": 9185, "length": "2e12", "weborganizer": {"__label__adult": 0.0005350112915039062, "__label__art_design": 0.0003154277801513672, "__label__crime_law": 0.0008726119995117188, "__label__education_jobs": 0.0003228187561035156, "__label__entertainment": 7.212162017822266e-05, "__label__fashion_beauty": 0.0002092123031616211, "__label__finance_business": 0.0002160072326660156, "__label__food_dining": 0.00041961669921875, "__label__games": 0.0007648468017578125, "__label__hardware": 0.0014514923095703125, "__label__health": 0.0007619857788085938, "__label__history": 0.00029778480529785156, "__label__home_hobbies": 9.715557098388672e-05, "__label__industrial": 0.0004718303680419922, "__label__literature": 0.00033974647521972656, "__label__politics": 0.00046753883361816406, "__label__religion": 0.000568389892578125, "__label__science_tech": 0.0343017578125, "__label__social_life": 9.363889694213869e-05, "__label__software": 0.005279541015625, "__label__software_dev": 0.95068359375, "__label__sports_fitness": 0.0003960132598876953, "__label__transportation": 0.0007052421569824219, "__label__travel": 0.00021946430206298828}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 38368, 0.03474]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 38368, 0.23291]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 38368, 0.89187]], "google_gemma-3-12b-it_contains_pii": [[0, 2219, false], [2219, 4534, null], [4534, 7725, null], [7725, 10142, null], [10142, 12945, null], [12945, 15786, null], [15786, 18731, null], [18731, 21368, null], [21368, 24194, null], [24194, 27047, null], [27047, 30200, null], [30200, 32674, null], [32674, 34968, null], [34968, 37427, null], [37427, 38368, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2219, true], [2219, 4534, null], [4534, 7725, null], [7725, 10142, null], [10142, 12945, null], [12945, 15786, null], [15786, 18731, null], [18731, 21368, null], [21368, 24194, null], [24194, 27047, null], [27047, 30200, null], [30200, 32674, null], [32674, 34968, null], [34968, 37427, null], [37427, 38368, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 38368, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 38368, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 38368, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 38368, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 38368, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 38368, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 38368, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 38368, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 38368, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 38368, null]], "pdf_page_numbers": [[0, 2219, 1], [2219, 4534, 2], [4534, 7725, 3], [7725, 10142, 4], [10142, 12945, 5], [12945, 15786, 6], [15786, 18731, 7], [18731, 21368, 8], [21368, 24194, 9], [24194, 27047, 10], [27047, 30200, 11], [30200, 
32674, 12], [32674, 34968, 13], [34968, 37427, 14], [37427, 38368, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 38368, 0.07927]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
a8cd2f84c465f317f25d680460d50401660c303b
Hierarchical conditional dependency graphs as a unifying design representation in the CODESIS high-level synthesis system Apostolos Kountouris, Christophe Wolinski

To cite this version: Apostolos Kountouris, Christophe Wolinski. Hierarchical conditional dependency graphs as a unifying design representation in the CODESIS high-level synthesis system. 13th International Symposium on System Synthesis (ISSS '00), Sep 2000, Madrid, Spain. IEEE Computer Society, pp.66-71, 2000, <10.1109/ISSS.2000.874030>. <hal-00545528>

HAL Id: hal-00545528 https://hal.archives-ouvertes.fr/hal-00545528 Submitted on 10 Dec 2010

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. The HAL multi-disciplinary open access archive is intended for the deposit and dissemination of research-level scientific documents, whether published or not, originating from French or foreign teaching and research institutions, or from public or private research laboratories.

Hierarchical Conditional Dependency Graphs as a Unifying Design Representation in the CODESIS High-Level Synthesis System Apostolos A. Kountouris MITSUBISHI ELECTRIC ITE 80, Av. Des Buttes de Coesmes 35700 Rennes, FRANCE kountouris@tcl.ite.mee.com Christophe Wolinski IRISA Campus Universitaire de Beaulieu F-35042 Rennes CEDEX, FRANCE wolinski@irisa.fr

Abstract In high-level hardware synthesis (HLS) there is a gap in the quality of the synthesized results between data-flow and control-flow dominated behavioral descriptions. Heuristics designed for the former usually perform poorly on the latter. To close this gap, the CODESIS interactive HLS tool relies on a unifying intermediate design representation and adapted heuristics that are able to accommodate both types of designs as well as designs of a mixed data-flow and control-flow nature. Preliminary experimental results in mutual exclusiveness detection and in efficiently scheduling conditional behaviors are encouraging and call for more extensive experimentation.

1. Introduction The topic of efficiently scheduling conditional behaviors having a complex conditional structure has been thoroughly investigated in previous research work, mainly because traditional DFG-based heuristics do not efficiently handle this kind of description [1]. Several better adapted heuristics were proposed ([1], [2], [3], [4], [5], [6]). The quality of their results depends heavily on the ability to exploit conditional resource sharing ([2], [4], [6], [7]) and speculative execution ([3], [5], [16], [17]) possibilities, as well as shorter path lengths obtained using node duplication techniques [3]. In resource-constrained scheduling these techniques make it possible to better utilize the hardware resources in the datapath and to obtain better schedules, which result in shorter execution paths and less control logic. An important issue, also underlined in previous work ([9], [10]), relates to the effects of the syntactic variance of the input descriptions on the synthesis results. These negative effects intervene at two distinct but interrelated levels as far as scheduling conditional behaviors is concerned: mutual exclusiveness detection and operation scheduling.
CDFG-based mutual exclusiveness detection techniques [3], [11], which use the structure of the input description, produce different schedules for semantically equivalent but syntactically different descriptions. This is due to the variability in the amount of detected mutual exclusiveness [9]. Furthermore, CFG-based scheduling (e.g. PBS [6]) is very sensitive to the statement order in the input description. From the above it is clear that efficient HLS for control-dominated designs relies on combining the above techniques and on effectively coping with the problem of syntactic variance.

1.1. A unifying approach In our previous work [22], [19] we aligned with the view supported by others [5], [8], [9] in advocating the need for more flexible internal design representations to optimize the HLS results and effectively handle both control-flow and data-flow dominated designs. In this paper it is explained why the adoption of an intermediate design representation, like the Hierarchical Conditional Dependency Graph (HCDG), unifies and enhances the high-level synthesis of behavioral descriptions. Unification is mainly achieved because the HCDG is well adapted to describe both control-flow and data-flow designs. Representing control and data flow in a uniform manner is key to efficient scheduling/allocation heuristics that combine the aforementioned optimization techniques under a single framework. Thanks to its origins in formal specification, the HCDG constitutes a formal framework on which HLS design activities can be optimized and freed from the negative effects of structural syntactic variance (if nesting, order). Though benchmark results are a good indication of the interest of the proposed approach, further refinement and validation on larger designs is needed. To this end the CODESIS interactive synthesis tool has been developed.

2. The HCDG internal design representation The HCDG [20] is a special kind of directed graph that represents data and control dependencies from a uniform dataflow perspective. It consists of the Conditional Dependency Graph (CDG) and the Guard Hierarchy (GH). To better illustrate the notions of the HCDG a small example will be used throughout this paper. Taken from [8], its C-like representation is shown in figure 1 and its HCDG in figure 2. For details on the HCDG construction process the interested reader is referred to [19].

```c
process jian(a, b, c, d, e, f, g, x, y)
  in  port a[8], b[8], c[8], d[8], e[8], f[8], g[8];
  in  port x, y;
  out port u[8], v[8];
{
  static T1;
  static T2[8], T3[8], T4[8], T5[8];
  T1 = (a + b) < c;
  T2 = d + e;
  T3 = c + 1;
  if (y)
    if (T1) u = T3 + d;      /* u1 */
    else if (x) u = T2 + e;  /* u2 */
    else u = T5 + g;         /* u3 */
  else
    T4 = T3 + e;
  T5 = T4 + f;
  u = T5 + g;                /* u3 */
}
```
Figure 1. Control-flow dominated description

The rest of the nodes (ovals) correspond to operations (i/o, computation, data multiplexing and state storage with either register or transparent latch semantics) that compute/assign values to variables. I/O node names are prefixed by '!'. Edges represent control and data dependencies. Control dependencies (most of them omitted in figure 2 for readability reasons) are from guard nodes to the CDG nodes labelled by them and are represented by dashed arrows. Solid arrows represent data (computation) dependencies. The HCDG obeys the principle of static single assignment. Nodes may have more than one definition only under mutually exclusive conditions (e.g. !u). In Table 1 the guard definitions for the example are given.
<table> <thead> <tr> <th>Guard</th> <th>Boolean Definition</th> <th>Guard</th> <th>Boolean Definition</th> </tr> </thead> <tbody> <tr> <td>H_1</td> <td>x</td> <td>H_6</td> <td>y * T_1</td> </tr> <tr> <td>H_2</td> <td>y</td> <td>H_7</td> <td>y * T_1</td> </tr> <tr> <td>H_3</td> <td>!y</td> <td>H_8</td> <td>y * T_1 + y * T_4</td> </tr> <tr> <td>H_4</td> <td>!y + y * T_4</td> <td>H_9</td> <td>y * T_4 + x</td> </tr> <tr> <td>H_5</td> <td>!y * T_4 + y * T_7</td> <td>H_10</td> <td>y * T_1 *</td> </tr> </tbody> </table> Table 1. Guard Definitions

2.1. Formal semantics and the guard hierarchy Initially the HCDG was developed as an internal representation of systems described in the SIGNAL synchronous formal specification language, used for the specification of reactive, real-time systems. The interested reader is referred to [25] for more details. As such, it provides a formal calculus that allows for the compile-time proof of correctness properties as well as the definition of correctness-preserving graph transformations useful in optimizing the synthesis results [24]. In a discrete time model where time is considered as an infinite sequence of logical instants, a guard is the set of logical instants at which the boolean condition defining it evaluates to true. The theoretical foundations of the HCDG consider guards as sets and guard formulas as applications of set operations on these sets. In [21] it is shown how an equivalent representation of guard formulas as boolean functions can be obtained and vice-versa. Guards are equivalence classes of the HCDG nodes grouping together nodes labeled by the same guard, thus active at the same logical instants. The guard nodes of a HCDG are organized in a Guard Hierarchy (GH), which is a hierarchical, tree-like representation of the design control (figure 2, bottom). The GH represents the inclusion relation between guards.

Inclusion relation. Let us denote by $h_i$ the boolean function corresponding to guard $H_i$; $h_i$ evaluates to true whenever $H_i$ is present and to false otherwise. The inclusion relation represented by the tree-like structure of the GH simply states that: $\forall H_j \in \text{descendants}(H_i),\ H_j \subseteq H_i$. Using the boolean definitions, the inclusion relation between two guards will be denoted as: $H_2 \subseteq H_1 \Rightarrow h_2 \leq h_1$. In addition, inclusion can be extended to the following cases: $H_k = H_j \cup H_i \Rightarrow H_j \subseteq H_k,\ H_i \subseteq H_k$ and $H_k = H_j \cap H_i \Rightarrow H_k \subseteq H_j,\ H_k \subseteq H_i$. In [21], the guard hierarchy is implemented as a hierarchy of BDDs. Control representations based on BDDs have already been used in previous work ([15], [4], [5]). The originality of the GH lies in the hierarchy construction and not in the use of BDDs, which are simply used for their efficiency. Using BDDs, two things can be efficiently achieved. First, equivalence between guard formulas can be easily established to avoid redundancy. Second, during hierarchization, it is easy to find the maximum depth in the tree at which a guard node can be inserted, by means of a special factorization algorithm (see [21] for details). This yields an optimally refined inclusion hierarchy. Some of the advantages of using the inclusion hierarchy information will be shown later on. Briefly, it permits minimizing the number of mutex tests [19] in guard exclusiveness detection used for conditional resource sharing, which is especially useful in interactive design environments where speed is important.
The hierarchy also enables the development of probabilistic priority functions used in HCDG-based list scheduling that efficiently account for conditional behavior [24]. Finally, in [20] it is shown that guard inclusion information is very important in order to triangularize a larger number of systems of guard equations than would be possible by using a rewriting system based only on the axioms of boolean algebra.

2.2. Efficient static mutual exclusiveness detection Mutual guard exclusiveness will be denoted by $\otimes$. Since in the formal foundations of the HCDG guards are sets of logical instants, two guards are mutually exclusive if their intersection is empty: $(H_1 \cap H_2 = \emptyset) \Leftrightarrow H_1 \otimes H_2$. In terms of the guard boolean function representations the above translates to: $h_1 \cdot h_2 = \text{false} \Leftrightarrow H_1 \otimes H_2$, which is the mutex test of [15]. Guard inclusion, as shown in [19], makes it possible to significantly reduce the number of mutual exclusion tests. This optimization relies on the following proposition. Let $\text{subhier}(H) = \text{descendants}(H) + \{H\}$; then: $H_1 \otimes H_2 \Rightarrow \forall (H_i, H_j) \in \text{subhier}(H_1) \times \text{subhier}(H_2),\ H_i \otimes H_j$, meaning that if two guards $H_1, H_2$ are mutually exclusive then every guard in the sub-hierarchy of $H_1$ is mutually exclusive to every guard in the sub-hierarchy of $H_2$.

A set of benchmarks was used for the experimental evaluation of the mutual exclusiveness identification capabilities of the proposed approach compared to the methods of [26], [8], which are the most powerful methods so far in terms of coverage and insensitivity to syntactic variance. The benchmark from [19] was included to test the capability of our approach to reason on conditions defined by simple arithmetic relations [23]. Two semantically equivalent but syntactically different descriptions for each benchmark were used (desc.1, desc.2). The first has maximal conditional nesting, as opposed to the second one, where conditions are flattened and each assignment is in its own conditional block. The results in the table below show that our method has at least as much coverage as the other two methods with a smaller number of mutex tests.

<table> <thead> <tr> <th>Benchmark</th> <th>Number of operations</th> <th>Total number of pairs</th> <th>Number of mutex pairs</th> <th>% coverage</th> <th>Number of mutex tests</th> </tr> </thead> <tbody> <tr> <td>Gupta&amp;Li</td> <td>desc1</td> <td>9</td> <td>36</td> <td>22</td> <td>100</td> </tr> <tr> <td></td> <td>desc2</td> <td></td> <td></td> <td>22</td> <td>100</td> </tr> <tr> <td>Gajski</td> <td>desc1</td> <td>6</td> <td>15</td> <td>7</td> <td>100</td> </tr> <tr> <td></td> <td>desc2</td> <td></td> <td></td> <td>7</td> <td>100</td> </tr> <tr> <td>Kim</td> <td>desc1</td> <td>24</td> <td>226</td> <td>120</td> <td>100</td> </tr> <tr> <td></td> <td>desc2</td> <td></td> <td></td> <td>120</td> <td>100</td> </tr> <tr> <td>Parker</td> <td>desc1</td> <td>16</td> <td>120</td> <td>35</td> <td>100</td> </tr> <tr> <td></td> <td>desc2</td> <td></td> <td></td> <td>35</td> <td>100</td> </tr> <tr> <td>test</td> <td>desc1</td> <td>8 + 3</td> <td>28 + 3</td> <td>18 + 0</td> <td>78</td> </tr> <tr> <td></td> <td>desc2</td> <td></td> <td></td> <td>18 + 0</td> <td>78</td> </tr> </tbody> </table>
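The proposition used for this minimization translates almost directly into code; the fragment below is a hypothetical sketch (names and data structures are invented for illustration, not taken from CODESIS) of how one positive mutex test is propagated to whole sub-hierarchies without any further BDD tests.

```java
import java.util.*;

// Hypothetical sketch: once a single test shows H1 and H2 to be mutually
// exclusive, every pair drawn from their sub-hierarchies is marked exclusive
// without testing it again.
final class MutexPropagation {
    static final class Guard {
        final String name;
        final List<Guard> children = new ArrayList<>();
        Guard(String name) { this.name = name; }
    }

    // subhier(H) = descendants(H) + {H}
    static List<Guard> subhier(Guard g) {
        List<Guard> all = new ArrayList<>();
        all.add(g);
        for (Guard child : g.children) all.addAll(subhier(child));
        return all;
    }

    // Record every pair (Hi, Hj) from the two sub-hierarchies as mutually exclusive.
    static void markExclusive(Guard h1, Guard h2, Set<Set<Guard>> mutexPairs) {
        for (Guard a : subhier(h1))
            for (Guard b : subhier(h2))
                if (a != b) mutexPairs.add(Set.of(a, b));
    }
}
```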
2.3. Mutual exclusiveness representation Guard mutual exclusiveness is represented by a compatibility graph, the MEG (Mutual Exclusiveness Graph), where vertices represent guards and edges represent the mutual exclusiveness relation between the guards they connect. For the example the resulting MEG is shown in figure 3. Cliques in the MEG correspond to groups of pairwise mutually exclusive guards. Depending on the resource sharing context (FUs, registers, interconnects), each vertex has an associated list of specification objects that are active under this guard and can be allocated to a resource of that type. For instance, during scheduling such a structure makes it easy to find groups of mutually exclusive operations that may share the same functional unit of a specific type. In [22] it is argued that the best adapted algorithm to find such cliques is based on the initial-graph-partition algorithm presented in [13]. Other heuristics, e.g. [14], are not as well adapted to satisfy our clique construction objectives since clique maximality is not always a good optimization criterion when scheduling is considered.

Figure 3. MEG for the example

Amongst other applications, HCDGs and guard exclusiveness have also been used for false path identification (see [23] for more details), useful in path-based scheduling heuristics as well as in more accurate static timing analysis.

2.4. Optimization by HCDG transformations The constructed HCDG reflects the way the design is described by the designer. By applying graph transformations, semantically equivalent representations are produced. Using guard information, transformations like dead code elimination, code motion, node duplication, path length reduction by dependency rearrangement, etc., can be easily performed. In our approach, transformations are of two types: pre- and post-scheduling. The objective of pre-scheduling transformations is to remove syntactic variance and bring the HCDG into a form that will eventually yield better scheduling results. Such transformations include lazy execution guard transformation to increase conditional resource sharing possibilities, dependency rearrangement, and node duplication at mutually exclusive guards to shorten path lengths. The term lazy execution is used to denote the situation when a node produces a value only as often as this value is used by other nodes. Computing the appropriate node guards for lazy execution may introduce additional guards in the guard hierarchy, and some control paths may become longer. However, the transformed graph contains more conditional resource sharing possibilities, and in a scheduling scheme where conditional resource sharing is combined with speculative execution this lengthening of control paths can be effectively amortized. Finally, in certain cases where the result of a node is used at mutually exclusive guards, the node can be duplicated at these guards without increasing hardware costs since the duplicated operation nodes are mutually exclusive and may share the same resource during scheduling. Post-scheduling transformations incorporate scheduling information (i.e. conditional resource sharing and speculative execution) into the HCDG, and so the transformed graph can be used in subsequent scheduling iterations or post-scheduling high-level synthesis activities (i.e. allocation/binding etc.). Comparing figure 2 to figure 4, in the HCDG of the example the initial node guards were modified to enforce lazy node execution (e.g. +1, +2, +3, < initially labelled by guard $H_1$).
Also, the node $+3$, used under mutually exclusive conditions ($H_6 \otimes H_3$), was duplicated to shorten the control paths. The data merge node (triangle $u$) is introduced to enforce the single assignment principle for variable $u$ (in the behavioral description), which has multiple definitions ($u_1$, $u_2$, $u_3$) under mutually exclusive conditions, represented by guards $H_3$, $H_6$, $H_{10}$ respectively.

Figure 4. HCDG after optimizing transformations

3. HCDG-based List Scheduling Heuristic In this section a modified list scheduling heuristic that takes advantage of the HCDG features is described. One important advantage of list scheduling is that its quality depends on the choice of the priority function [1]. In [22] we exploit the guard hierarchy to define a probabilistic priority function that better accounts for the conditional nature of the design. This is combined with an intelligent scheduling policy that employs pre-scheduling optimizing transformations (lazy execution, node duplication), conditional resource sharing, and speculative execution. This process has several advantages. The list scheduling priority criterion is satisfied for the greatest number of distinct execution instances (paths) simultaneously because the cliques constructed for conditional resource sharing always contain the highest-priority node and the largest number of other high-priority nodes that can share a resource with it. Compared with [9] and [5], speculative execution is considered only after normally executing nodes have been scheduled. In this way the risk of lengthening execution paths by displacing normally executing operations in favor of speculatively executing ones is avoided. Finally, conditional resource sharing is exploited during scheduling and not before, and so the lengthening of execution paths due to inappropriate conditional resource sharing (as in [2], [11]) is also avoided.

3.1. Experimental results The HCDG-based list scheduling heuristic is compared to other similar heuristics (Kim [2], CVLS [7], [3], PBS [6], Brewer [5], ADD-FDLS [9]) using benchmarks appearing in previous work (kim, waka, maha, jian from [2], [7], [12], [8]). For each benchmark the HCDG was constructed, the guard hierarchy was refined, the HCDG was transformed for lazy execution, and guard mutual exclusiveness was established using the techniques described in [18]. Results are given in Tables 2 to 5, for various resource constraints (cmp/+/−: one-cycle resources) and chaining length (cn: 1 means no chaining) in terms of “total / longest path / shortest path” numbers of states.

### Table 2. Results for the “maha” benchmark <table> <thead> <tr> <th>Resources</th> <th>Kim</th> <th>PBS</th> <th>Crit.
path</th> <th>Brewer</th> <th>Ours</th> </tr> </thead> <tbody> <tr> <td>cmp: 0, +: 1, :- 1, cn: 1</td> <td>8/8/3</td> <td>-</td> <td>-</td> <td>-</td> <td>8/8/3</td> </tr> <tr> <td>cmp: 0, +: 1, :- 1, cn: 2</td> <td>6/6/2</td> <td>9/9/2</td> <td>9/9/-</td> <td>-</td> <td>9/9/4</td> </tr> <tr> <td>cmp: 0, +: 2, :- 3, cn: 1</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>4/4/2</td> </tr> <tr> <td>cmp: 0, +: 2, :- 3, cn: 3</td> <td>3/3/2</td> <td>-</td> <td>4/4/-</td> <td>-</td> <td>3/3/2</td> </tr> <tr> <td>cmp: 0, +: 2, :- 3, cn: 5</td> <td>-</td> <td>4/3/1</td> <td>-</td> <td>-</td> <td>3/3/2</td> </tr> </tbody> </table> ### Table 3. Results for the “waka” benchmark <table> <thead> <tr> <th>Resources</th> <th>CVLS</th> <th>Kim</th> <th>PBS</th> <th>Brewer</th> <th>Ours</th> </tr> </thead> <tbody> <tr> <td>cmp: 1, +: 1, :- 1, cn: 1</td> <td>7/7/5</td> <td>7/7/5</td> <td>-</td> <td>-</td> <td>7/7/4</td> </tr> <tr> <td>cmp: 1, +: 1, :- 1, cn: 2</td> <td>-</td> <td>7/7/3</td> <td>7/7/3</td> <td>-</td> <td>6/6/3</td> </tr> <tr> <td>cmp: 1, ALU: 2, +: 1, :- 1</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>6/6/3</td> </tr> <tr> <td>cmp: 1, ALU: 2, +: 1, :- 1</td> <td>-</td> <td>6/6/3</td> <td>6/6/3</td> <td>-</td> <td>6/6/3</td> </tr> </tbody> </table> ### Table 4. Results for the “kim” benchmark <table> <thead> <tr> <th>Resources</th> <th>Kim</th> <th>Brewer</th> <th>ADD</th> <th>Ours</th> </tr> </thead> <tbody> <tr> <td>cmp: 2, +: 2, :- 1, cn: 1</td> <td>6/6/6</td> <td>-</td> <td>6/6/6</td> <td>6/6/6</td> </tr> <tr> <td>cmp: 1, +: 2, :- 1, cn: 1</td> <td>-</td> <td>-</td> <td>-</td> <td>6/6/6</td> </tr> <tr> <td>cmp: 2, ALU: 2, +: 1, :- 1</td> <td>-</td> <td>-</td> <td>-</td> <td>6/6/6</td> </tr> </tbody> </table> ### Table 5. Results for the “jian” benchmark <table> <thead> <tr> <th>Resources</th> <th>Kim</th> <th>Brewer</th> <th>ADD</th> <th>Ours</th> </tr> </thead> <tbody> <tr> <td>cmp: 1, +: 1, :- 1, cn: 1</td> <td>4/4/3</td> <td>-</td> <td>4/4/3</td> <td>4/4/3</td> </tr> <tr> <td>cmp: 1, +: 2, :- 1, cn: 1</td> <td>-</td> <td>-</td> <td>-</td> <td>3/3/2</td> </tr> </tbody> </table> Finally, the insensitivity of the scheduling results to the effects of syntactic variance is shown in table 6. For each benchmark two semantically equivalent but syntactically different descriptions (descr.1, descr.2) are used. The first, has a maximal conditional nesting as opposed to second ### Table 6. Insensitivity to syntactic variance <table> <thead> <tr> <th>Bench.</th> <th>Waka</th> <th>Maha</th> <th>Kim</th> <th>Jian</th> </tr> </thead> <tbody> <tr> <td>Resources</td> <td>cmp: 1</td> <td>ALU: 1</td> <td>cmp: 0</td> <td>ALU: 1</td> </tr> </tbody> </table> 4. The CODESIS tool In order to validate our results in more realistic contexts and quantitatively evaluate the effectiveness of the HCDG and the HCDG-based heuristics the CODESIS interactive CAD tool has been developed. Currently the specification front-end is the SIGNAL formal specification language but in the future other standard descriptions languages (e.g. C, VHDL) will be supported. Translation of the HCDG into C and VHDL already exists and allows us to interface to existing implementation tools like software compilers and hardware synthesis (behavioral and RTL) tools. A graphical user interface permits to visualize the HCDG, interactively apply graph transformations, scheduling heuristics and visualize the obtained results. 
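To make the scheduling policy of Section 3 concrete, the following is a deliberately simplified, hypothetical sketch (invented names, no data-dependence or speculation handling, not CODESIS code; it also assumes at least one functional unit exists for every operation type) of list scheduling with conditional resource sharing over MEG cliques.

```java
import java.util.*;

// Hypothetical sketch: at every control step, the highest-priority unscheduled
// operation seeds a clique, and other operations of the same type that are
// pairwise mutually exclusive with the clique share its functional unit.
final class HcdgListScheduler {
    record Op(String name, String type, double priority) {}
    interface Mutex { boolean exclusive(Op a, Op b); }   // backed by the MEG

    static List<List<Set<Op>>> schedule(List<Op> ops, Map<String, Integer> fuCount, Mutex mutex) {
        List<Op> remaining = new ArrayList<>(ops);
        remaining.sort(Comparator.comparingDouble(Op::priority).reversed());
        List<List<Set<Op>>> steps = new ArrayList<>();
        while (!remaining.isEmpty()) {
            List<Set<Op>> step = new ArrayList<>();
            Map<String, Integer> free = new HashMap<>(fuCount);
            for (Op seed : new ArrayList<>(remaining)) {
                if (!remaining.contains(seed) || free.getOrDefault(seed.type(), 0) == 0) continue;
                Set<Op> clique = new LinkedHashSet<>();
                clique.add(seed);
                // Fold lower-priority operations onto the same unit when the MEG allows it.
                for (Op other : remaining) {
                    if (!other.equals(seed) && other.type().equals(seed.type())
                            && clique.stream().allMatch(m -> mutex.exclusive(m, other))) {
                        clique.add(other);
                    }
                }
                remaining.removeAll(clique);
                free.merge(seed.type(), -1, Integer::sum);
                step.add(clique);
            }
            steps.add(step);
        }
        return steps;
    }
}
```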
The HCDG is a powerful internal design representation with the ability to treat both data-flow and control-flow designs under the same framework. Techniques and heuristics developed for data-flow oriented designs can be readily adapted for the HCDG. In addition, several others have been developed to tackle the problems related to control-flow intensive designs. The HCDG-based scheduling approach exploits most of the existing scheduling optimization techniques, enjoying their combined benefits. Both speculative execution and conditional resource sharing are combined in a uniform and consistent framework, similarly to the dynamic CVs of [3] and the guards in [4], [5]. Moreover, it does not suffer from the effects of syntactic variance at either the mutual exclusiveness detection or the scheduling level, as CDFG- or CFG-based approaches do. The hierarchical control representation makes it possible to minimize the number of mutual exclusiveness tests and also to develop probabilistic priority functions that account for the conditional nature of the design. Finally, to test our ideas in more realistic contexts, a user-friendly HLS tool has been built using the HCDG as its internal representation. References
{"Source-Url": "https://hal.archives-ouvertes.fr/file/index/docid/545528/filename/isss00.pdf", "len_cl100k_base": 6565, "olmocr-version": "0.1.50", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 23827, "total-output-tokens": 8017, "length": "2e12", "weborganizer": {"__label__adult": 0.0005178451538085938, "__label__art_design": 0.0008254051208496094, "__label__crime_law": 0.0005269050598144531, "__label__education_jobs": 0.0006422996520996094, "__label__entertainment": 0.0001392364501953125, "__label__fashion_beauty": 0.00026679039001464844, "__label__finance_business": 0.0003840923309326172, "__label__food_dining": 0.0004551410675048828, "__label__games": 0.0008149147033691406, "__label__hardware": 0.01007843017578125, "__label__health": 0.0006895065307617188, "__label__history": 0.0003986358642578125, "__label__home_hobbies": 0.00020492076873779297, "__label__industrial": 0.0016946792602539062, "__label__literature": 0.00026416778564453125, "__label__politics": 0.00048732757568359375, "__label__religion": 0.0008172988891601562, "__label__science_tech": 0.258056640625, "__label__social_life": 9.54270362854004e-05, "__label__software": 0.00867462158203125, "__label__software_dev": 0.71240234375, "__label__sports_fitness": 0.0003902912139892578, "__label__transportation": 0.001132965087890625, "__label__travel": 0.0002815723419189453}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28959, 0.06067]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28959, 0.37469]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28959, 0.85042]], "google_gemma-3-12b-it_contains_pii": [[0, 1158, false], [1158, 5247, null], [5247, 8842, null], [8842, 14944, null], [14944, 19283, null], [19283, 24089, null], [24089, 28959, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1158, true], [1158, 5247, null], [5247, 8842, null], [8842, 14944, null], [14944, 19283, null], [19283, 24089, null], [24089, 28959, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 28959, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28959, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28959, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28959, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28959, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28959, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28959, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28959, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28959, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28959, null]], "pdf_page_numbers": [[0, 1158, 1], [1158, 5247, 2], [5247, 8842, 3], [8842, 14944, 4], [14944, 19283, 5], [19283, 24089, 6], [24089, 28959, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28959, 0.26257]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
4ab9f41d647743dd2dfe4b61ef7fb114ad775da6
AN EFFICIENT ALGORITHM FOR GENERATING LINEAR TRANSFORMATIONS IN A SHUFFLE-EXCHANGE NETWORK by T. Etzion and A. Lempel Technical Report #315 May 1984 An Efficient Algorithm for Generating Linear Transformations in a Shuffle-Exchange Network Running Head: Transformations in a Shuffle-Exchange Network T. Etzion* and A. Lempel* ABSTRACT This paper presents an algorithm for generating all the permutations defined by linear transformations on a shuffle-exchange network of $2^n$ processors in $2n-1$ passes. The proposed algorithm generates any such permutation in $O(n \log^2 n)$ elementary steps. The subclass of bit-permutations is generated in $O(n)$ steps. Key words: Algorithm, complexity, linear transformations, permutations, shuffle-exchange network. Computer Science Department, Technion, Israel Institute of Technology, Haifa, ISRAEL. 1. INTRODUCTION The shuffle-exchange (SE) network is an efficient tool for implementing various types of parallel processes [2],[6]. The SE network is composed of \( N=2^n \) processors, where each processor is represented by a binary \( n \)-tuple \((x_1,x_2,\ldots,x_n)\). In the SHUFFLE-operation processor \((x_1,x_2,\ldots,x_n)\) transfers information to processor \((x_2,\ldots,x_n,x_1)\). In the EXCHANGE-operation processors \((x_1,x_2,\ldots,x_{n-1},0)\) and \((x_1,x_2,\ldots,x_{n-1},1)\) may exchange information, independent of other pairs of this form. One SHUFFLE followed by one EXCHANGE is called a pass. Between the SHUFFLE phase and the EXCHANGE phase of a pass there is a computational phase during which the active pairs of the upcoming EXCHANGE are determined. Prior to the first pass there is normally a preprocessing stage. The overall procedure consisting of the preprocessing stage and all the passes is often referred to as the routing algorithm. An important problem in this context is the design of efficient routing algorithms that implement permutations in a SE network in a minimal number of passes. In general, a transformation on a SE network associates with each processor a destination processor for the purpose of information transfer. This paper deals with the realization of nonsingular linear transformations, i.e., permutations for which each bit of the destination processor is a linear combination of the bits of the origin processor. It is well known[3],[4], that such permutations can be realized in \( 2n-1 \) passes, using a routing algorithm of \( O(n^2) \) steps. In Section 2 we show how to realize these permutations in \( 2n-1 \) passes using a routing algorithm of \( O(n \log^2 n) \) steps. In Section 3 we show that if the permutation is merely a bit permutation then only \( O(n) \) steps are required. Following Linial and Tarsi[3] we employ the combinatorial model described below. **Definition 1[3]:** A 0–1 matrix \( A \) of order \( N \times m, N=2^n, m \geq n \), is said to be balanced if all the rows in any \( n \) consecutive columns of \( A \) are distinct. Definition 2: The standard matrix is an $N \times n$ matrix $D$ whose $i$-th row is the base-2 representation of $i$, $0 \leq i \leq N-1$. In terms of these definitions our problem can be stated as follows [3] Given a balanced $N \times n$ matrix $A$ find a (possibly empty) matrix $X$ such that the matrix $[D \mid X \mid A]$ is balanced. 2. REALIZATION OF LINEAR TRANSFORMATIONS In this section we show how to realize linear transformations on a SE network in \(2n-1\) passes using a routing algorithm of \(O(n\log^2 n)\) steps. 
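Before turning to the construction, the two primitive network operations defined in the introduction can be made concrete; the following is a small, hypothetical sketch (processor IDs represented as n-bit integers with $x_1$ as the most significant bit), not part of the routing algorithm itself.

```java
// Hypothetical sketch of the two primitive operations of an SE network with
// N = 2^n processors: SHUFFLE rotates the n-bit ID left by one position;
// EXCHANGE toggles the last bit, pairing (x1,...,x(n-1),0) with (x1,...,x(n-1),1).
final class ShuffleExchange {
    // (x1, x2, ..., xn) -> (x2, ..., xn, x1)
    static int shuffle(int id, int n) {
        int msb = (id >> (n - 1)) & 1;
        return ((id << 1) & ((1 << n) - 1)) | msb;
    }

    // Toggle xn, the bit on which exchange partners differ.
    static int exchange(int id) {
        return id ^ 1;
    }
}
```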
In the sequel all the arithmetic is over \(GF(2)\), all the vectors are column vectors with \(n\) elements, \(I = [I(1) \ I(2) \ \ldots \ I(n)]\) denotes the identity matrix of order \(n\), and \(T\) denotes a nonsingular matrix of order \(n\). **Proposition 1 [3]**: Let \(A\) be a matrix of order \(N \times n\). Then \(AT\) is balanced if and only if \(A\) is. **Definition 3**: A matrix \(R\) of order \(n \times m\), \(n \leq m\), is said to be \(n\)-regular if every \(n\) consecutive columns of \(R\) are linearly independent. It follows readily from Proposition 1 that if \(R\) is \(n\)-regular then \(DR\) is balanced. In what follows we consider \(n\)-regular matrices of the form \(R = [I \ Y \ T]\) and propose a method of finding a suitable matrix \(Y\) of \(n-1\) columns when given the matrix \(T\). Consider an \(n \times n\) binary matrix \(B = [B(1) B(2) \ldots B(n)]\), where each column \(B(k)\) has either one or two nonzero entries. \(B\) can be viewed as the incidence matrix of the (undirected) graph \(G(B)\) defined as follows: **Definition 4**: \(G(B)\) has \(n+1\) vertices \(0, 1, \ldots, n\) and \(n\) edges \(e(1), e(2), \ldots, e(n)\), where \(e(k)\) joins vertices \(i > 0\) and \(j > 0\) if \(B(k)\) has nonzero entries in rows \(i\) and \(j\), and \(e(k)\) joins vertices \(i > 0\) and \(0\) if \(B(k)\) is nonzero in row \(i\) only. **Lemma 1 [5]**: The vectors \(B(1), B(2), \ldots, B(n)\) are linearly independent if and only if \(G(B)\) is a tree. **Lemma 2**: Let \(B(1), B(2), \ldots, B(n)\) be linearly independent vectors. Then there exists an integer \(k\), \(1 \leq k \leq n\), and binary coefficients \(b_j\), \(1 \leq j \leq n-1\), such that \[ I(k) = B(n) + \sum_{j=1}^{n-1} b_j B(j) \] (1) **Proof**: The matrix \(B = [B(1) B(2) \ldots B(n)]\) is nonsingular. Hence, there exists a matrix \(Q = [Q(1) Q(2) \ldots Q(n)]\) such that \(BQ = I\). Since \(Q\) is nonsingular there exists at least one \(k\) such that the last entry of \(Q(k)\) equals 1. For this \(k\) we have \(I(k) = BQ(k) = B(n) + \sum_{j=1}^{n-1} b_j B(j)\), where \(b_j\) is the \(j\)-th entry of \(Q(k)\), which is (1). Q.E.D.
Lemma 3: Let \( B(1), B(2), \ldots, B(n) \) be linearly independent vectors and let \( k \), \( 1 \leq k \leq n \), be an integer satisfying Lemma 2. Then \( B(0), B(1), B(2), \ldots, B(n-1) \) are linearly independent, where \[ B(0) = I(k) + \sum_{j=1}^{n-1} c_j B(j), \quad c_j \in \{0, 1\} \] (2) Proof: Assume the contrary, that \( B(0), B(1), B(2), \ldots, B(n-1) \) are linearly dependent. Then, since the last \( n-1 \) vectors are linearly independent, there exist \( d_j, 1 \leq j \leq n-1 \), such that \[ B(0) = \sum_{j=1}^{n-1} d_j B(j) \] (3) From (1), (2), and (3), we obtain \[ B(n) = \sum_{j=1}^{n-1} (b_j + c_j + d_j) B(j) \] which contradicts the linear independence of the \( B(j), 1 \leq j \leq n \). Q.E.D. Based on Lemmas 1 and 3, we propose the following construction of \( Y = [Y(1) \ Y(2) \ldots \ Y(n-1)] \) such that \( [I \ Y \ T] \) is \( n \)-regular for a given \( T = [T(1) \ T(2) \ldots \ T(n)] \). Construction 1: Let \( B_0 = T \) and let \( B_m = [Y(n-m) \ldots Y(n-1) \ T(1) \ldots T(n-m)] \), \( 1 \leq m \leq n-1 \). Given \( B_m, 0 \leq m \leq n-1 \), construct \( Y(n-m-1) \) as follows. (i) If \( k = n-m-1 \) satisfies Lemma 2, set \( Y(n-m-1) = I(n-m-1) \). (ii) If \( k = n-m-1 \) does not satisfy Lemma 2, find an integer \( q \) which does satisfy Lemma 2 and set \( Y(n-m-1) = I(n-m-1) + I(q) \). Lemma 4: The matrix \( [I(1) \ldots \ I(n) \ Y(1) \ldots \ Y(n-1) \ T(1) \ldots \ T(n)] \) obtained via Construction 1 is \( n \)-regular.
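Before turning to the proof of Lemma 4, Construction 1 is concrete enough to prototype directly. The sketch below is our illustration only: it follows Construction 1 column by column, but obtains the inverse \(Q\) of \(B_m\) by ordinary Gauss-Jordan elimination over \(GF(2)\) rather than by Csanky's parallel algorithm, on which the paper's complexity bound relies.
\begin{verbatim}
def gf2_inverse(M):
    # Invert a square 0/1 matrix over GF(2) by Gauss-Jordan elimination.
    n = len(M)
    A = [row[:] + [1 if i == j else 0 for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r][col] == 1)  # raises if M is singular
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(n):
            if r != col and A[r][col] == 1:
                A[r] = [a ^ b for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

def construction1(T):
    # Construction 1: given a nonsingular n x n matrix T over GF(2) (T[i][j] is the
    # entry of column T(j+1) in row i+1), return the columns Y(1), ..., Y(n-1).
    n = len(T)
    unit = lambda k: [1 if i == k - 1 else 0 for i in range(n)]   # identity column I(k)
    T_cols = [[T[i][j] for i in range(n)] for j in range(n)]      # T(1), ..., T(n)
    Y = [None] * n                                                # Y(1), ..., Y(n-1), 1-based
    for m in range(n - 1):
        # B_m = [Y(n-m) ... Y(n-1)  T(1) ... T(n-m)]
        B_cols = [Y[j] for j in range(n - m, n)] + T_cols[:n - m]
        B = [[B_cols[j][i] for j in range(n)] for i in range(n)]
        Q = gf2_inverse(B)
        k = n - m - 1
        if Q[n - 1][k - 1] == 1:              # case (i): k = n-m-1 satisfies Lemma 2
            Y[k] = unit(k)
        else:                                 # case (ii): pick any q satisfying Lemma 2
            q = next(j for j in range(1, n + 1) if Q[n - 1][j - 1] == 1)
            Y[k] = [a ^ b for a, b in zip(unit(k), unit(q))]
    return [Y[j] for j in range(1, n)]

# Example: an upper-triangular (hence nonsingular) matrix over GF(2).
T = [[1, 1, 0],
     [0, 1, 1],
     [0, 0, 1]]
print(construction1(T))   # -> [[1, 1, 0], [0, 1, 1]], i.e. Y(1) = I(1)+I(2), Y(2) = I(2)+I(3)
\end{verbatim}
On such a small example one can check by hand that every \(n\) consecutive columns of \([I \ Y \ T]\) are linearly independent, as Lemma 4 asserts.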
Proof: The \( n \)-regularity of \( [Y(1) \ldots \ Y(n-1) \ T(1) \ldots \ T(n)] \) follows directly from Lemma 3. To complete the proof it suffices to show that \( [I(1) \ldots \ I(n) \ Y(1) \ldots \ Y(n-1)] \) is \( n \)-regular. Let \( C_1 = I \) and let \( C_{m+1} = [I(m+1) \cdots I(n) \ Y(1) \cdots Y(m)] \), \( 1 \leq m \leq n-1 \). We will show that linear independence among the columns of \( C_m \), \( 1 \leq m < n \), implies the same for \( C_{m+1} \). Clearly, the columns of \( C_1 \) are linearly independent. Suppose \( C_m \), \( m \geq 1 \), is nonsingular and consider \( C_{m+1} = [I(m+1) \cdots I(n) \ Y(1) \cdots Y(m)] \). By Construction 1, either \( Y(m) = I(m) \) or \( Y(m) = I(m) + I(q) \) for some \( q \neq m \). In the first case it is clear that \( C_{m+1} \) is nonsingular. In the latter case, noting that \( C_r \), \( 1 \leq r \leq n \), has at most two nonzero entries in every column, we can view \( C_r \) as the incidence matrix of the graph \( G(C_r) \) according to Definition 4. By Lemma 1, since \( C_m \) is nonsingular \( G(C_m) \) is a tree. \( G(C_{m+1}) \) is obtained from \( G(C_m) \) by deleting the edge \((0,m)\) (corresponding to the column \( I(m) \)) and inserting the edge \((m,q)\) (corresponding to the column \( I(m) + I(q) \)). If \( G(C_{m+1}) \) contains a cycle then, since \( Y(1), \ldots, Y(m) \) are linearly independent, the cycle must include the vertex 0. Since \( G(C_m) \) is a tree, deleting the edge \((0,m)\) from \( G(C_m) \) leaves a graph with no path between the vertices 0 and \( m \). Hence, inserting the edge \((m,q)\) cannot generate a cycle that contains the vertex 0. Hence \( G(C_{m+1}) \) is a tree, and \( C_{m+1} \) is nonsingular. Q.E.D. In order to find an integer \( k \) that satisfies Lemma 2 we need an efficient algorithm to invert a matrix. To this end we use the algorithm proposed by Csanky [1], who showed how to invert a matrix of order \( n \) in \( O(\log^2 n) \) steps using a polynomial number of processors. Csanky's algorithm utilizes a model that has an arbitrary number of identical processors with independent control and an arbitrarily large shared memory with unrestricted access. In this model, each processor is capable of taking its operands from the shared memory, performing any one of the binary operations \( +, -, \ast, / \), and storing the result in memory in one step. Based on Lemma 2, Csanky's algorithm, and Construction 1, we propose a procedure to realize linear transformations. In this procedure each processor has at each stage the following information: 1. An \((n-1)\)-tuple \( U = (u(1), \ldots, u(n-1)) \), where \( u(j) = k \) if \( Y(j) = I(j) + I(k) \) and \( u(j) = 0 \) if \( Y(j) = I(j) \). 2. Two \(n\)-tuples, \(S\) and \(F = ST\), whose 'initial values' represent, respectively, the ID of the said processor and that of the destination processor as defined by the given linear transformation. In the SHUFFLE and the EXCHANGE operations that follow, each processor transfers its current \(S\) and \(F\) and receives new values for \(S\) and \(F\). Procedure 1: Given the linear transformation defined by a nonsingular matrix \( T = [T(1) \ T(2) \cdots T(n)] \), let \( B_0 = T \) and let \( B_m = [Y(n-m) \cdots Y(n-1) \ T(1) \cdots T(n-m)] \), \( 1 \leq m \leq n-1 \). Having computed \( B_r = [Y(n-r) \cdots Y(n-1) \ T(1) \cdots T(n-r)] \), \( r \geq 0 \), apply Csanky's algorithm to generate the inverse \( Q = [Q(1) \ Q(2) \cdots Q(n)] \) of \( B_r \).
If the last entry of \( Q(n-r-1) \) equals 1, set \( Y(n-r-1) = I(n-r-1) \) and \( u(n-r-1) = 0 \); otherwise, find an integer \( k \) such that the last entry of \( Q(k) \) equals 1 and set \( Y(n-r-1) = I(n-r-1) + I(k) \) and \( u(n-r-1) = k \). After \( u(1), \ldots, u(n-1) \) are generated, they are transferred to each of the \(N\) processors of the SE network. With reference to the \(n\)-tuples \( S = (s(1), \ldots, s(n)) \) and \( F = (f(1), \ldots, f(n)) \), stored with each processor, perform the following. - \(s(0) = 0\); - for \(i = 1\) to \(n-1\) do - begin - SHUFFLE; - if \( s(u(i)) \neq 0 \) then EXCHANGE; - end; - SHUFFLE; - if \( s(n) \neq f(1) \) then EXCHANGE; - for \(i = 1\) to \(n-1\) do - begin - SHUFFLE; - if \( f(i+1) \neq s(i) + s(u(i)) \) then EXCHANGE; - end; Note that SHUFFLE and EXCHANGE are executed in parallel by all the processors. **Theorem 1:** Procedure 1 realizes a linear transformation in $2n - 1$ passes using a routing algorithm of $O(n \log^2 n)$ steps. **Proof:** In order to show that Procedure 1 realizes the linear transformation associated with the matrix $T$, it suffices to show that it implements the moves implied by the balanced matrix $$DB = D[I \ Y(1) \cdots Y(n-1) \ T] = [D \ DY(1) \cdots DY(n-1) \ DT].$$ That is, for a given processor $S = (s(1), \ldots, s(n))$ and its destination processor $F = (f(1), \ldots, f(n))$, the path in the SE network via which the transformation $F = ST$ is implemented by Procedure 1 is given by the sequence of processors corresponding to successive $n$-tuples from the row $$SB = s(1), \ldots, s(n), (s(1) + s(u(1))), (s(2) + s(u(2))), \ldots, (s(n-1) + s(u(n-1))), f(1), \ldots, f(n).$$ To this end, note that for each row $SB$, Procedure 1 performs an EXCHANGE if and only if the leading bit of the current processor differs from the last bit of the succeeding processor. The claimed complexity of Procedure 1 is obtained as follows: The $n$-regular matrix $B = [I \ Y(1) \cdots Y(n-1) \ T]$ is generated by $n-1$ applications of Csanky's algorithm. Therefore, this part consists of $O(n \log^2 n)$ steps. The $(n-1)$-tuple $U = (u(1), \ldots, u(n-1))$ is transferred to each of the $N$ processors of the SE network on a bus in $O(n)$ steps. The $2n - 1$ passes correspond to the last $2n - 1$ columns of $DB$ and each pass is executed in constant time. Thus, the overall complexity of the procedure is $O(n \log^2 n)$. Q.E.D. 3. REALIZATION OF BIT-PERMUTATIONS In this section we show how to realize the linear transformation associated with a permutation matrix \( T \) in \( O(n) \) steps. **Definition 5:** \( T = [T(1) \ T(2) \ldots \ T(n)] \) is called a permutation matrix if \( T(j) = I(\sigma(j)), \ j = 1, 2, \ldots, n, \) where \( \sigma(1), \sigma(2), \ldots, \sigma(n) \) is an arbitrary permutation on the integers \( 1, 2, \ldots, n. \) Based on Lemma 1, we propose the following construction of \( Y = [Y(1) \ Y(2) \ldots Y(n-1)] \) such that \( [I \ Y \ T] \) is \( n \)-regular for a given permutation matrix \( T = [I(\sigma(1)) \ I(\sigma(2)) \ldots I(\sigma(n))] \). **Construction 2:** Let \( B_0 = T \) and let \( B_m = [Y(n-m) \ldots Y(n-1) \ I(\sigma(1)) \ldots I(\sigma(n-m))] \), \( 1 \leq m \leq n-1 \). Along with the columns of \( Y \) we construct a sequence of graphs \( G_m \), \( 0 \leq m \leq n-1 \).
\( G_0 \) is the edgeless graph of \( n \) isolated vertices \( 1, 2, \ldots, n \), and given \( B_m \) and \( G_m \), \( 0 \leq m \leq n-1 \), construct \( Y(n-m-1) \) and \( G_{m+1} \) as follows. If the addition of edge \( (n-m-1, \sigma(n-m)) \) to \( G_m \) creates a cycle, set \( Y(n-m-1) = I(n-m-1) \) and \( G_{m+1} = G_m \); otherwise, set \( Y(n-m-1) = I(n-m-1) + I(\sigma(n-m)) \) and obtain \( G_{m+1} \) by adding the edge \( (n-m-1, \sigma(n-m)) \) to \( G_m \). **Lemma 5:** The matrix \( [I(1) \ldots I(n) \ Y(1) \ldots Y(n-1) \ I(\sigma(1)) \ldots I(\sigma(n))] \) obtained via Construction 2 is \( n \)-regular. **Proof:** First, observe that every column of the matrix \( B_r \), \( 0 \leq r \leq n-1 \), has at most two nonzero entries and, thus, can be viewed as the incidence matrix of the graph \( G(B_r) \) defined in Section 2. Note that \( G_r \), as defined by Construction 2, can be obtained from \( G(B_r) \) by deleting from the latter the vertex 0 and all the edges incident with this vertex. Note further that \( G(B_{m+1}) \) is obtained from \( G(B_m) \) by: (i) Deletion of edge \( (0, \sigma(n-m)) \). (ii) Addition of either edge \( (0, n-m-1) \), or edge \( (\sigma(n-m), n-m-1) \). Assume that \( G(B_m), m \geq 0 \), is a tree. Then, operation (i) results in two pieces of \( G(B_m) \), with no path between vertices 0 and \( \sigma(n-m) \). Hence, if at this stage, connecting vertex \( \sigma(n-m) \) to vertex \( n-m-1 \) creates a cycle, it follows that operation (i) leaves vertex \( n-m-1 \) in the same piece with vertex \( \sigma(n-m) \), namely with no path between vertex 0 and vertex \( n-m-1 \). Therefore, in this case, the graph \( G(B_{m+1}) \) obtained in operation (ii) by adding the edge \((0, n-m-1)\) is a tree. If, on the other hand, connecting vertex \( \sigma(n-m) \) to vertex \( n-m-1 \), after operation (i), does not create a cycle in the piece containing vertex \( \sigma(n-m) \), it certainly does not create a cycle with vertex 0 and the resulting graph is again a tree. Since \( G(B_0) \) is a tree it follows that \( G(B_m) \) is a tree for all \( 0 \leq m \leq n-1 \), which implies that the matrix \([Y(1) \cdots Y(n-1) \ I(\sigma(1)) \cdots I(\sigma(n))]\) is \( n \)-regular. The \( n \)-regularity of the matrix \([I(1) \cdots I(n) \ Y(1) \cdots Y(n-1)]\) follows in the same manner as in the proof of Lemma 4. Q.E.D. The reader can readily verify that the result of Construction 2 could be obtained via Construction 1 through an appropriate choice of the parameter \( k \) of Lemma 2. Construction 2 leads to Procedure 2, given below for realizing bit-permutations. In this procedure, which is simpler than Procedure 1, each processor has at each stage the following information. 1. An \((n-1)\)-tuple \( U=(u(1), \ldots, u(n-1)) \) as in Procedure 1. 2. An \( n \)-tuple \( S \) as in Procedure 1. 3. The permutation \( P=(p(1), \ldots, p(n)) \), where \( p(j) = \sigma(j) \). Procedure 2: **Part 1** for \( i=1 \) to \( n-1 \) do begin \( u(i) := p(i+1) \); \( \mathrm{check}(i) := \mathrm{false} \) end;
\begin{verbatim}
check(n) := false;
for i := 1 to n - 1 do
begin
  cycle := false;
  current := i;
  while ((cycle = false) and (current < n) and (check(current) = false)) do
  begin
    check(current) := true;
    if u(current) ≠ i
      then current := u(current)
      else cycle := true;
  end;
  if cycle = true then u(i) := 0;
end;
\end{verbatim}
**Part 2**
\begin{verbatim}
s(0) := 0;
for i := 1 to n - 1 do
begin
  SHUFFLE;
  if s(u(i)) ≠ 0 then EXCHANGE;
end;
SHUFFLE;
if s(n) ≠ s(p(1)) then EXCHANGE;
for i := 1 to n - 1 do
begin
  SHUFFLE;
  if s(p(i + 1)) ≠ s(i) + s(u(i)) then EXCHANGE;
end;
\end{verbatim}
**Theorem 2:** Procedure 2 realizes a bit-permutation in \(2n - 1\) passes and \(O(n)\) steps. Proof: In Part 1 of Procedure 2 each processor computes the \((n-1)\)-tuple \(U=(u(1),\ldots,u(n-1))\). Initially \(u(n-m-1)\) is set to \(p(n-m)\), which corresponds to setting \(Y(n-m-1)\) to \(I(n-m-1)+I(p(n-m))\). Then, \(u(n-m-1)\) is set to 0 if the insertion of the edge \((n-m-1, p(n-m))\) creates a cycle in the corresponding graph \(G_m\). Part 2 of Procedure 2 is identical to Procedure 1, with \(s(p(i))\) substituting for \(f(i)\). The claimed complexity of Procedure 2 is obtained as follows. Part 1 consists of \(O(n)\) steps since the variables \(\text{check}(i), 1 \leq i \leq n\), ensure that for each \(i\), the variable \(\text{current}\) takes the value \(i\) at most once in the while loop. As in Procedure 1, each of the \(2n-1\) passes is executed in constant time. Thus, the overall complexity of the procedure is \(O(n)\). Q.E.D. ACKNOWLEDGMENT The authors wish to thank Yossi Shiloach for presenting the problem and for many helpful discussions. REFERENCES
{"Source-Url": "http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-get.cgi/1984/CS/CS0315.pdf", "len_cl100k_base": 6146, "olmocr-version": "0.1.53", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 37679, "total-output-tokens": 7370, "length": "2e12", "weborganizer": {"__label__adult": 0.00038695335388183594, "__label__art_design": 0.0004353523254394531, "__label__crime_law": 0.0004830360412597656, "__label__education_jobs": 0.0008382797241210938, "__label__entertainment": 0.00013387203216552734, "__label__fashion_beauty": 0.0001883506774902344, "__label__finance_business": 0.0004949569702148438, "__label__food_dining": 0.0004711151123046875, "__label__games": 0.0009889602661132812, "__label__hardware": 0.0085906982421875, "__label__health": 0.0007791519165039062, "__label__history": 0.0003893375396728515, "__label__home_hobbies": 0.0001862049102783203, "__label__industrial": 0.0014743804931640625, "__label__literature": 0.0002665519714355469, "__label__politics": 0.0003757476806640625, "__label__religion": 0.0007328987121582031, "__label__science_tech": 0.4423828125, "__label__social_life": 8.040666580200195e-05, "__label__software": 0.01338958740234375, "__label__software_dev": 0.525390625, "__label__sports_fitness": 0.0003261566162109375, "__label__transportation": 0.0008921623229980469, "__label__travel": 0.00023627281188964844}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 19391, 0.03791]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 19391, 0.65881]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 19391, 0.81556]], "google_gemma-3-12b-it_contains_pii": [[0, 153, false], [153, 854, null], [854, 2984, null], [2984, 3326, null], [3326, 5523, null], [5523, 7270, null], [7270, 9662, null], [9662, 11387, null], [11387, 13016, null], [13016, 15349, null], [15349, 17031, null], [17031, 17699, null], [17699, 18673, null], [18673, 19391, null]], "google_gemma-3-12b-it_is_public_document": [[0, 153, true], [153, 854, null], [854, 2984, null], [2984, 3326, null], [3326, 5523, null], [5523, 7270, null], [7270, 9662, null], [9662, 11387, null], [11387, 13016, null], [13016, 15349, null], [15349, 17031, null], [17031, 17699, null], [17699, 18673, null], [18673, 19391, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 19391, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 19391, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 19391, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 19391, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 19391, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 19391, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 19391, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 19391, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 19391, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 19391, null]], "pdf_page_numbers": [[0, 153, 1], [153, 854, 2], [854, 2984, 3], [2984, 3326, 4], [3326, 5523, 5], [5523, 7270, 6], [7270, 9662, 7], [9662, 11387, 8], [11387, 13016, 9], [13016, 15349, 10], [15349, 17031, 11], [17031, 17699, 12], [17699, 18673, 13], [18673, 19391, 14]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 19391, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
6a2ddb3ca8c412266444672ca56a42127a20c7dc
The Gadfly: An Approach to Architectural-Level System Comprehension Paul Clements, Robert Krut, Ed Morris, Kurt Wallnau {clements, rk, ejm, kcw}@sei.cmu.edu Software Engineering Institute Carnegie Mellon University Pittsburgh, PA 15213-3890 Abstract Technology to support system comprehension tends to reflect either a “bottom-up” or “top-down” approach. Bottom-up approaches attempt to derive system models from source code, while top-down approaches attempt to map abstract “domain” concepts to concrete system artifacts. While both approaches have merit in theory, in practice the top-down approach has not yielded scalable, cost-effective technology. One problem with the top-down approach is that it is very expensive to develop domain models, and it is difficult to develop models that are sufficiently general to be applied to multiple systems (and hence amortize the development cost). This paper describes the Gadfly, an approach for developing narrowly-focused, reusable domain models that can be integrated and (re)used to aid in the process of top-down system comprehension. 1. Introduction A primary purpose of program understanding technology is, ultimately, to assist maintainers to develop a system-level understanding of an application so that changes to the application can be introduced in a rational, consistent way. Unfortunately, although source code is often the most reliable arbiter of what a system does, it does not reflect all of the attributes of an application necessary to develop a true system-level understanding: there is more to understanding a system than understanding what function it computes. System characteristics such as performance, robustness, security, etc., must also be understood. We refer to such characteristics as quality attributes, and quality attributes are related more to the architecture of a system than to its code [16]. This paper describes a knowledge-based software assistant called the Gadfly. The Gadfly is intended to help designers create applications that attain a selected set of quality attributes1, as well as to help maintainers understand how an existing application has achieved those properties. The construction and understanding guidance of the Gadfly is at the architectural level, which deals with allocation of functionality to components and inter-component interaction, rather than the internal workings of individual components. To motivate the architectural approach to system comprehension based on quality attributes, the paper makes the following points: • Software architecture provides a level of understanding at which a system’s quality attributes can be best managed and understood, because they most often depend on inter-component relationships and cannot be discovered from source code alone. • Quality attributes represent coherent domains of specialized design knowledge that can be separately modeled and combined in different ways to support both forward engineering and system comprehension. • Quality attribute knowledge is similar to the knowledge represented by specialized design schemas and similar concepts found in program comprehension literature, and is amenable to knowledge representation modeling. • Software architecture can be used both as a framework for integrating sets of quality attribute domain models, and as a juncture between top-down and bottom-up strategies for program comprehension. 
The rest of the paper is structured as follows: Section 2 provides an overview of current approaches to program and system comprehension, and describes the program understanding context for the Gadfly. Section 3 surveys the key concepts of software architecture, and outlines the potential role software architecture can play in both forward-engineering and system comprehension activities. Section 4 describes the Gadfly, and illustrates its use through an operational scenario. Section 5 summarizes the key contributions of the Gadfly to program comprehension, and outlines potential next steps. 1. The Gadfly prototype addresses only the information security attribute. 2. Program comprehension technology 2.1 Top-down and bottom-up approaches for system comprehension Current program understanding models identify a number of different types of knowledge that a maintainer uses to comprehend software, including knowledge of programming, knowledge of the real-world situation represented in the software, and knowledge of the application domain. All of these types of knowledge are important to the maintainer, since they embody different abstractions and impart different kinds of understanding of software systems. These kinds of knowledge are used in a top-down or bottom-up manner, or some opportunistic combination of these approaches. The bottom-up model of program comprehension suggests that a model of the application is built starting with program knowledge and works to produce higher abstractions, using strategies like “chunking”. Program knowledge[1] reflects the maintainers’ understanding of programming idioms, program structure, algorithms, and flow of control and data (the programming domain), and is typically related to a bottom-up approach. Alternately, a top-down model of program comprehension suggests that comprehension proceeds from higher level abstractions down to lower level program idioms, algorithms, etc. Pennington [1] suggests that, in addition to program knowledge, expert maintainers also rely on an understanding of the real-world problem addressed by the software in order to comprehend a particular software system. This world knowledge is referred to as a situation model, which describes the problem domain from a higher level of abstraction than program models. According to Pennington, program comprehension involves employing both bottom-up and top-down strategies to relate and coordinate information from the program model with that of the situation model[2]. Brooks suggests that, in order to comprehend a programming problem, experts employ a top-down, hypothesis driven problem solving approach[3]. In applying this approach, Soloway and Ehrlich found that expert programmers employ high-level schemas (plans) that strongly influence expectations about what a program should look like[5]. Koeneemann and Robertson demonstrated that for experienced programmers, program comprehension occurred primarily in a top-down manner using such schemas; however, programmers resort to bottom-up strategies when they lacked hypotheses, when hypotheses failed, or for close scrutiny of relevant code[4]. Subjects determined what program segments were relevant based on their knowledge of the task domain, general programming knowledge, and their current understanding of the program. Letovsky suggests that program comprehension can best be viewed as an opportunistic application of bottom-up and top-down strategies[7]. 
Guindon, Curtis, and Krasner also addressed the question of opportunistic system comprehension, but from the standpoint of the design of highly complex systems[6]. In experiments requiring the design of logic to control the functioning of lifts (elevators), the authors found that the primary determinant of performance was the presence (or lack) of computational techniques, called specialized design schemas, that correspond to characteristics of the application domain. These specialized design schemas encode a solution template and the situations under which the solution is appropriate. Examples of specialized design schemas employed by the experiment subjects included scheduling and routing, message communication, and concurrency. Guindon, et al., found that subjects applied these schemas in a highly opportunistic manner, building partial solutions at various levels of abstractions. Clearly, program comprehension relies on the application of a set of domain models (variously called specialized design schemas, situation models, program models, etc.). We expect that program comprehension relies on processing analogous to those suggested by Guindon for design. 2.2 Approaches to building domain models Current tool support for the application of domain knowledge to aid in program comprehension suffers from several limitations. While a number of approaches to codifying domain knowledge have been developed (e.g.,[9][10]), to date they have demonstrated only limited success. Nor are tools based on source code parsing sophisticated enough to produce domain models or recognize architectural designs within systems. The existing approaches to supporting the maintainer by supplementing their domain knowledge can be classified into two broad categories: - approaches that attempt to automatically extract the high-level domain knowledge from source code and other system artifacts; and, - approaches that attempt to codify and organize the knowledge of experts about specific systems. The former approach (automatic extraction of high-level abstractions from source code) has proven difficult. For example, automatic recognition of algorithms is complex due to the wide variance in the manner in which a specific algorithm can be encoded, and the huge volume of code in which the algorithm may be embedded. In addition, this approach still requires an expert maintainer to relate any algorithms found to domain concepts. The latter approach (building knowledge bases from expert input) has led to a number of interesting tools that provide some support for software maintainers. However, the effort necessary to create the knowledge base is extremely high, relying on time-consuming interviews involving system experts and often “knowledge engineers” who specialize in the organization of knowledge into appropriate rules. In addition to being expensive to develop, the completed knowledge bases are inflexible and hard to maintain [11]. They mix information that spans multiple views or abstractions in a system (e.g., algorithms, architecture, requirements), domain knowledge that crosses multiple software domains (e.g., security, fault-tolerance, distribution, performance), knowledge unique to the application domain (e.g., banking, health informatics, command and control), and knowledge specific to a single system (e.g., a specific air-traffic control system). It is hard to see how information within the resulting knowledge bases can be generalized to other systems within (or outside of) the application domain. 
Thus, their heavy development cost cannot be amortized across other applications. 2.3 A new approach for knowledge-based system comprehension In this paper, we suggest a new approach to developing domain knowledge to support system understanding, and describe a prototype implementation of this approach, called the Gadfly. The Gadfly is based on these premises: • There is a strong symmetry, largely unexploited to date, between developing a system and comprehending it after the fact. Comprehension seeks to understand the artifacts produced during construction. Hence, the knowledge structures that served to guide the construction tend to be the same ones that provide the framework against which the legacy artifacts can be understood. • Systems are comprehended, at least in part, from the vantage of codifiable domains of knowledge. The Gadfly recognizes that more than one kind of domain applies to a system. For example, to build a secure command-and-control system requires knowledge about command-and-control systems as well as methods for achieving security in computer systems. These domains may be orthogonal in many ways; in any case, knowledge about them can be separately modeled and combined in different ways to reveal different aspects of a system under investigation. • Just as domain knowledge can be partitioned into different kinds of expertise (e.g., security, fault-tolerance, command-and-control), so, too, it can be partitioned and mapped to systems in terms of different views or kinds of understanding (e.g., code, architecture and problem statement views). Thus, system comprehension involves understanding a system, through various abstractions, in terms of different kinds of domain knowledge. • The architecture of system is an abstraction that is particularly fruitful as a basis for system comprehension, and the concepts of software architecture can provide a foundation for structuring the investigation of a system, and for integrating supportive domain knowledge. The Gadfly is a system that guides its user through an analysis of a system, based on separate knowledge bases dealing with the application domain and relevant system quality attributes. The prototype Gadfly was built to render analytical assistance with secure command-and-control systems; hence, it was armed with one knowledge base about kinds of command-and-control systems, and a second knowledge base about computer security. 3. Software architecture and comprehension Software architecture refers to a view of a system that focuses on the nature and interactions of the major components. While not a new concept—the fundamental notion dates back at least to 1968 when Dijkstra pointed out that carefully structuring a system imparts useful properties and should be considered in addition to just computing the right answer [13]—software architecture as a topic of study is enjoying a flurry of interest. See, for example, [14]. A software architecture represents the integration of application domain concepts with system design expertise to ensure that the application will meet (or, in the case of program understanding, how it has met) its requirements. System design expertise is used to make (or understand) design trade-offs, e.g., performance vs. modifiability or security vs. ease of use. These and other quality attributes are manifested at the architectural level of systems, and cannot be discerned or analyzed from individual system components. 
More generally, an architecture represents a body of knowledge with multiple uses for both the designer and maintainer: • Architecture enables communication and can be used to convey the decisions of designers to maintainers. • Architecture represents a transferable abstraction of a system that can be applied to other systems exhibiting similar requirements. Domain-specific software architectures describe the features of a family of systems [15]. • Architecture suggests a recipe book for designers and maintainers to assist them in selecting and identifying the design idioms that guide the organization of modules and subsystems into complete systems. • Architecture simplifies system construction and guides program understanding by acting as a framework that constrains the manner in which components interact with their environment, receive and relinquish control, manage data, communicate, and share resources. - Architecture enables a system to satisfy its quality attributes. For example, modifiability depends extensively on the system’s modularization, which reflects the encapsulation strategies; performance depends largely upon the volume and complexity of inter-component communication and coordination, etc. - Architectural constructs are institutionalized in the development and maintenance organization’s team structure, work assignments, management units, etc. Therefore, crucial information about the social context of a system, vital for understanding, is embodied in its architecture. Most research in software architecture has tended to focus on forward engineering. Architecture description languages (ADLs) continue to be an active area of research [12]. The key challenge for ADLs is to express the unchanging characteristics of a system in addition to describing allowable variation. Closely related to work on ADLs is research on automated composition of systems from architectural models [17][18]. Architecture-level composers tend to view system building as an exercise in constraining the variability of an underlying design until no variation remains and the result is an executable system. In contrast, relatively little research has been undertaken to understand how software architecture can be used to aid in system comprehension. The software architecture analysis method (SAAM) [16] is an architecture comprehension technique that designers can use to validate that design decisions support selected quality attributes. SAAM is essentially a guide to architecture-level comprehension if quality attributes. However, SAAM is focused mostly on the methodology for comprehension; there is less emphasis on codifying design heuristics associated with any particular quality attribute. The Gadfly draws upon advances in both areas of software architecture research (forward engineering and comprehension) by recognizing that the kinds of knowledge needed to compose a system are, by and large, the same kinds of knowledge needed to comprehend an existing system. The kinds of analysis a designer subjects a hypothetical design are similar to the analysis of operational (fielded) designs, whether the intention is to perform a design trade-off for a particular quality attribute (forward engineering), or to discover the presence of a quality attribute (system comprehension). 4. The Gadfly The Gadfly prototype is a knowledge-based software assistant (KBSA)\(^2\) that supports the development and comprehension of command, control and communications (C3) systems. 
These functions are supported in this way: - Development: portions of command centers\(^3\) can be semi-automatically composed from components and a generic command center architecture. - Comprehension: specific command center designs can be evaluated from an information security perspective. - Integrated composition and comprehension: comprehension services may be invoked from composition services to provide guidance in the composition process. We first describe the knowledge and computational models used by the composition function of Gadfly, since these models are used (though extended) by the comprehension function. We then describe the overall Gadfly architecture and how the composition and comprehension functions interact. Finally, we annotate a sample session using the Gadfly for system comprehension purposes. 4.1 The Gadfly computational model The composition function of the Gadfly is built upon a domain model—a model which describes, in this case, the structure and operational context of command centers. The command center domain model is represented in RLF [20], which employs a structured-inheritance network (similar to Brachman’s KL-ONE [21]) and a specialized forward-chaining rule-based inferencing system. The domain model includes descriptions of command center tasks (e.g., situation monitoring and threat assessment), links between these tasks and architectural components in a command center (e.g., geographic information system) and links from architectural components to specific technologies (e.g., DeLorme mapping system). The composer allows command center designers to interactively develop portions of command centers through a refinement process: navigating among, and converging on, decision points in a domain model. These decision points represent various alternatives in the design and implementation of a family of command centers described by this domain model. The composition process is strongly analogous to various hardware composition systems developed in the 1980’s [19]. --- \(^2\) In general, a KBSA is an application that uses deductive reasoning to provide expert assistance to humans engaged in knowledge-intensive activities. \(^3\) A specific (headquarter) function within a C3 system. --- To illustrate the knowledge and computational models of the composer, consider the simplified fragment of the command center domain model illustrated in Figure 1. This fragment represents a small portion of the generic architecture encoded within the domain model. It asserts that the message processing component of the architecture has exactly one inter-process communication (IPC) system and zero or more components for injecting test messages into the IPC subsystem. Further, there are exactly two kinds of injectors: one injects ASCII-encoded messages, one injects binary-encoded messages. The composer works by navigating through such network models, asking questions pertinent to the current “focus” (the semantic network concept it is examining) of the composer, and acting upon these answers. At the point when the focus of the composer is at the message processing system, for example, the designer might be asked whether a test message injector is desired, and, if so, how many and of what kind. Similar questions might be asked about the IPC subsystem, for example if the model described specific products that could provide this functionality.
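The navigate-ask-act loop just described can be sketched with a toy model. The fragment below is entirely our illustration (RLF's structured-inheritance network and rule engine are not reproduced here); it encodes a small piece of a generic architecture in the spirit of Figure 1, asks a cardinality question at each optional decision point, and emits the resulting refinement:

# Hypothetical fragment of a domain model, loosely following Figure 1.
# Each concept lists its sub-components with allowed cardinalities (min, max).
MODEL = {
    "message_processing": [
        ("ipc_subsystem", (1, 1)),             # exactly one IPC system
        ("test_message_injector", (0, None)),  # zero or more injectors
    ],
    "test_message_injector": [
        ("ascii_injector", (0, None)),
        ("binary_injector", (0, None)),
    ],
    "ipc_subsystem": [],
    "ascii_injector": [],
    "binary_injector": [],
}

def compose(focus, ask):
    # Walk the model from `focus`, asking how many of each optional part to include,
    # and emit an instantiation (refinement) of the generic model.
    refinement = {}
    for part, (lo, hi) in MODEL[focus]:
        count = lo if lo == hi else ask(f"How many '{part}' for '{focus}'?", lo, hi)
        if count:
            refinement[part] = [compose(part, ask) for _ in range(count)]
    return refinement

# A scripted "designer" standing in for the interactive dialogue.
answers = iter([1, 0, 1])   # one injector overall; no ASCII injector; one binary injector
design = compose("message_processing", lambda question, lo, hi: next(answers))
print(design)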
As the designer answers questions, the composer emits an instantiation of the generic model (a refinement) to record the decisions made by the designer and any consequences of these decisions; in some cases it can also emit build scripts for automatically constructing prototype systems. We refer to the semantic network (as illustrated in Figure 1) as encoding structural knowledge. Extra-structural knowledge is also encoded in the model as different types of rules that are linked to the structural model. Rules are used to capture domain knowledge not easily encoded in a semantic network, and are used to propagate design decisions through the network. For example, the decision to select a binary injector might be made automatically if an earlier design decision determined that the class of messages processed by the command center included binary messages; this, in turn, could have been deduced (and propagated) from a still-earlier decision regarding the mission of the command center (also modeled in the domain model). We developed a proof-of-concept composer based upon the model just described. However, we discovered that application domain knowledge alone was an insufficient foundation for the composer. While the domain model described alternative components and compositions, it provided little engineering guidance on how to select among these alternatives. Frequently, such decisions could be made on the basis of desired quality attributes. To help designers make such decisions, knowledge about these quality attributes and how they can be achieved by different design decisions must also be consulted. The Gadfly prototype is an extension of the original composer that augments the application-specific domain model (C3) with quality attribute domain knowledge. ### 4.2 The Gadfly architecture The initial customer for the Gadfly was concerned with evaluating systems (proposed and existing) from an information security perspective. They had already developed a domain model of information security principles to aid in analysis and evaluation efforts, but found the model difficult to employ because it lacked an application-specific context. This problem was the complement to limitations of the composer prototype, which had an application context but lacked quality attribute models. The purpose of the Gadfly prototype was to demonstrate the re-use of security expertise for designing new systems, and for evaluating existing systems, from a security perspective. The Gadfly architecture reflects the integration of application-domain knowledge with different kinds of highly-specialized design knowledge; it also reflects our contention that there is a symmetry between system design and system comprehension, and that a single technology can accommodate both kinds of activities. Similarly to the C3 domain model, the information security domain model was encoded in a structural model augmented with extra-structural rules. The structural model encodes information such as: - a threat model, which describes a range of potential security threats that confront systems, e.g., disruption, deception and disclosure; each threat is the root of its own taxonomy (e.g. 
there are many kinds of disruption); - a security service model, which describes basic classes of countermeasures for meeting various threats, e.g., hardware redundancy, cryptographic checksum, password protection; and, - a security mechanism model, which describes and links various “approved” mechanisms that may be useful for implementing all or part of one or more security services. Extra-structural rules encode procedural knowledge, referred to as strategies in [8], for applying this knowledge in specific contexts. These strategies include the kinds of information that security analysts will seek regarding the operational and maintenance context of a system, as well as concrete analysis processes, such as mathematical models for deriving the seriousness of a threat (for example, balancing factors such as the potential gain for the intruder, the damage incurred by the system, the risk of detection to the intruder, and the cost of detecting the intruder). The Gadfly architecture is illustrated in Figure 2. The elicitor (the top-most box in Figure 2) is the function that manages the dialogue between the Gadfly and the designer. To conduct this dialogue the elicitor needs to modulate between C3 domain knowledge and information security domain knowledge. The elicitor “walks” the structural models, asking questions depending upon rules and facts associated with various concepts in the structural model, emitting instantiations of concepts where appropriate, and shifting focus to new nodes in the structural model. Links between concepts instantiated from the C3 domain model and information security domain model represent the assignment of security concepts to the application architecture\(^5\). The process continues in cyclic fashion (fire the rules, ask questions, shift focus in the network) until the session is complete (no more nodes to visit or questions to ask). The Gadfly can be used for system composition, in which case security knowledge can be consulted as a means of determining, for example, which components to select for a given system. Alternatively, the Gadfly can be used for constructing a cognitive model of security concepts within an existing system. Moreover, it is not even strictly necessary that the system description be encoded in a domain model to use the Gadfly in this way. If an architectural model did not exist for a system, the execution of the Gadfly would result not in an assignment of security concepts to an architectural description, but rather in a framework for investigating the system from a security perspective. That is, the questions asked by the elicitor and the instantiated security model generated from the dialogue would provide a basis for further investigation of the system using whatever system artifacts are available (code, design notes, or the developers themselves). In effect, then, the Gadfly helps maintainers by allowing them to re-use highly specialized system comprehension strategies [8]. 4.3 Annotated output from a Gadfly session The ten-page report that forms the basis of the following annotations was generated from a session in which an analyst was using the Gadfly to investigate the security properties of the message processing component of the command center architecture.\(^6\) There are six sections of the Gadfly-generated report (not counting a prologue which provides context information on the report itself), each illustrated in turn in Figures 3a through 3f.\(^7\) The message processing component is itself an aggregate concept comprised of several kinds of components, including: message translators and validators, interprocess communication components, message generation components, human-machine interface components, etc. In the following scenario, specific off-the-shelf components that implement these functions had already been selected. Thus, the scenario reflects a comprehension task: the analyst is attempting to infer security properties of a design where several key decisions have already been made. Figure 3a reflects a security prioritization scheme for the particular system under investigation. This information represents requirements and design assumptions for the command center: comprehension of more detailed security properties (and the relationship of these properties to other aspects of the command center design) is not possible without this kind of information. This is an important feature of the Gadfly: it addresses information that is best specified (or found) in architectural-level specifications, i.e., issues of system and component context and inter-component relationships. Since this information is not likely to be found in code, this aspect of Gadfly reflects the reuse of a system comprehension strategy: the application of the strategy produces a framework for investigating security properties of the command center in question. --- 5. The prototype did not go so far as to create these links, since the operational concept of the composer was that instantiated networks were transient, and existed only so long as needed by the harvester. The links exist conceptually, and are illustrated in the annotated report generated by Gadfly (Section 4.3 of this paper). The generalization noted here is easily achieved, however. 6. The report corresponds to “security recommendations” depicted in Figure 2. 7. The content of the report has been edited slightly for formatting purposes. --- Figure 3b summarizes the specific threats to which this command center must respond. As was stated about the threat context and prioritization information depicted in Figure 3a, this information reflects design context; however (as will be illustrated) this information provides a basis for concept assignment of specific security threats to specific components in the command center architecture. Figure 3c summarizes aspects of a command center that might be associated with system-level documentation, but seldom with software-level documentation: the physical environment in which the software will execute. This information, too, is crucial for comprehending the security aspects of the software.
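The assignment and inference steps that follow (Figures 3d and 3e) can be pictured as simple forward-chaining over exactly this kind of context. The toy rules below are purely illustrative; they are neither the RLF rule syntax nor the actual security domain model, and the facts only mirror the flavor of Figures 3a-3c:

# Hypothetical context facts and priorities in the spirit of Figures 3a-3c.
context = {
    "outside_net_connection": "satellite",
    "network_components_in": "unsecure_area",
    "spot_checks_by_guards": "not_performed",
}
priorities = {"disclosure", "disruption"}

# Each rule: (condition over the facts, inferred threat, candidate countermeasure service).
RULES = [
    (lambda c: c["outside_net_connection"] == "satellite",
     "disclosure: interception", "link encryption"),
    (lambda c: c["network_components_in"] == "unsecure_area"
               and c["spot_checks_by_guards"] == "not_performed",
     "disruption: corruption tamper", "physical access control"),
]

def infer(context, priorities):
    findings = []
    for condition, threat, service in RULES:
        if condition(context) and threat.split(":")[0] in priorities:
            findings.append((threat, service))
    return findings

for threat, service in infer(context, priorities):
    print(f"inferred threat: {threat}  ->  candidate service: {service}")

In the actual Gadfly the analogous inferences are assigned to specific components of the command center architecture, as described next.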
Armed with this context information (Figures 3a-c), the Gadfly can proceed with the task of assigning security concepts to elements of the command center. Further, the Gadfly can infer new threats not explicitly specified by the analyst. Figure 3d is an excerpt of the security concepts directly assigned to command center components. Figure 3e is an excerpt of the threats inferred from the system context. These inferences result from sometimes subtle interactions between environmental context, threat priority and component attributes. These inferred threats are then assigned to the appropriate components. Figure 3a: Threat Context and Prioritization You specified the following sets of threat consequences as being the most important to counter: * disruption via incapacitation * disruption via corruption * disruption via falsification * disclosure via interception * disclosure via exposure **Figure 3b: Known Threats** Specific threats most concerned about: * disclosure: intrusion penetration * disclosure: interception scavenging * deception: falsification insertion * deception: falsification substitution * disruption: corruption tamper malicious * disclosure: intrusion cryptoanalysis **Figure 3c: Physical System Context** You specified the component would operate in the following environment/context: The factor: has the attribute(s): component_info: source code available to: nobody outside net connection: satellite physical site: network components in: unsecure area component housed in: secure area spot checks by guards: not performed Finally, the Gadfly is able to derive a set of security services that should be present in a system if it is to meet the assigned threats. As in Figures 3d and 3e, the Gadfly is able to make a direct assignment of security concepts (services, in this case) to components: it is also able to infer the need for additional services. For brevity, only the former is illustrated in Figure 3f. As noted earlier, the information security domain model underlying the Gadfly also maps security services (Figure 3f) to approved security mechanisms (e.g., software components). As a result, the kinds of mechanisms needed in the architecture to achieve a specific set of quality attribute objectives (security in this case) have been identified; the identify of these mechanisms can be used as a basis for a more fine-grained pattern matching within the code (e.g., search for cryptographic or password services in code). **5. Conclusions** **5.1 Gadfly Contributions** The Gadfly is a knowledge-based assistant for helping designers create command centers, and for helping security --- 8. DEC_Message_Q, PRISM_MTV, etc., are the names of specific off-the-shelf components used to implement this instantiation of the command center architecture. analysts comprehend the security properties of existing (and perhaps evolving) command center systems. The Gadfly makes three separate but related contributions to program understanding: a focus on architecture-level specifications, a partitioning of domain models into separately-modeled and individually-selectable knowledge bases, and a demonstration of the symmetry between system design and system comprehension. Architecture is the appropriate locus for specifying and comprehending system-wide properties. 
Continuing with security as an example, a component that is susceptible to logic tampering may represent a vulnerability in one system, but if it is enclosed within a more secure component (in inter-component relationship) or within a secure operating environment (a system boundary relationship), then it will not be a vulnerability. Thus, the property of vulnerability needs to be assigned to a specification of the system at a level that spans individual components: namely, the architecture level. The second contribution of Gadfly—separable domain models—is as much an economic contribution as it is a technical one. The idea of developing separable, reusable domain models is not new—it is a founding principle of the Knowledge Sharing Initiative, which is developing techniques for creating “shareable ontologies” [22]. The economic and technical justifications for shareable ontologies are strong: cost amortization, community standards, evolutionary refinement of shared models, etc. While we are not suggesting that the information security model is a shareable ontology—it lacks some of the characteristics specified by [22] that would make it one—we do claim it plays the role of shareable ontology within the Gadfly system. That is, constraining Gadfly domain models (currently, information security and C3) in various ways makes it possible to develop domain models that are focused on, for example, comprehension strategies and concept assignment to architectural components (as opposed to lines of code). Thus, it is not hard to envision generalizations of the Gadfly that would allow designers to consult construction or comprehension strategies focused on fault-tolerance, distribution, real-time performance, or other quality attributes of systems. The development of specialized domain (comprehension strategy) models is more economically feasible than developing one-of-a-kind, system-specific mixed-content domain models that do not easily transfer to new applications. Finally, the Gadfly demonstrates that the same kinds of human expertise needed to design systems are also needed to comprehend systems. Although design requires a synthesis of many kinds of expertise, system comprehension can be (and in practice often is) narrowly focused to the search for specific kinds of properties. The Gadfly demonstrated how one technology framework could re-use knowledge for both constructive (forward-engineering) and de-constructive (reverse-engineering) activities. 5.2 Future Direction Although the Gadfly architecture admits the possibility of integrating arbitrarily many domain models to support construction and comprehension of systems, the current system requires that the elicitor have knowledge of the specific knowledge-bases being employed. Ideally, the elicitor would be able to independent of domain models. However, while it might be simple to implement this feature, it is equally important not to subject designers to “information overload.” Some way of pruning or focusing the dialogue will be important, and this will be more difficult to accomplish. Similarly, modeling and managing the interaction between domain models (e.g., distribution and fault tolerance) will also be difficult, as these interactions imply trade-off reasoning that may be difficult to formalize. A more practical extension of the Gadfly would be the development of domain models covering other kinds of quality attributes. 
While some work has been done to formalize static quality attributes such as modifiability, it would be interesting to see if this work could be formalized in such a way that it could be used by the Gadfly. Similarly, design heuristics for narrow ranges of issues such as real-time and fault tolerance could also be developed. Acknowledgments. Special credit to: Mark Simos (Organnon Motives), who originated the Gadfly concept in the mid 1980’s; Paula Matuszek (Loral), whose expertise in reasoning systems made the latest Gadfly possible; and Brian Koehler (US Government) for his security expertise. The SEI is sponsored by the US Department of Defense. References
{"Source-Url": "https://resources.sei.cmu.edu/asset_files/WhitePaper/1996_019_001_23056.pdf", "len_cl100k_base": 7126, "olmocr-version": "0.1.48", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 26863, "total-output-tokens": 8984, "length": "2e12", "weborganizer": {"__label__adult": 0.0003039836883544922, "__label__art_design": 0.0004162788391113281, "__label__crime_law": 0.00028204917907714844, "__label__education_jobs": 0.0005369186401367188, "__label__entertainment": 4.8279762268066406e-05, "__label__fashion_beauty": 0.0001061558723449707, "__label__finance_business": 0.00015091896057128906, "__label__food_dining": 0.00023472309112548828, "__label__games": 0.0004353523254394531, "__label__hardware": 0.0006008148193359375, "__label__health": 0.0002388954162597656, "__label__history": 0.00015974044799804688, "__label__home_hobbies": 6.020069122314453e-05, "__label__industrial": 0.0002460479736328125, "__label__literature": 0.0002219676971435547, "__label__politics": 0.00015687942504882812, "__label__religion": 0.0002994537353515625, "__label__science_tech": 0.008697509765625, "__label__social_life": 5.882978439331055e-05, "__label__software": 0.005863189697265625, "__label__software_dev": 0.98046875, "__label__sports_fitness": 0.0001951456069946289, "__label__transportation": 0.00031280517578125, "__label__travel": 0.0001360177993774414}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 42053, 0.01207]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 42053, 0.74747]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 42053, 0.91285]], "google_gemma-3-12b-it_contains_pii": [[0, 4108, false], [4108, 9460, null], [9460, 14856, null], [14856, 19510, null], [19510, 24858, null], [24858, 29726, null], [29726, 33324, null], [33324, 36983, null], [36983, 42053, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4108, true], [4108, 9460, null], [9460, 14856, null], [14856, 19510, null], [19510, 24858, null], [24858, 29726, null], [29726, 33324, null], [33324, 36983, null], [36983, 42053, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 42053, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 42053, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 42053, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 42053, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 42053, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 42053, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 42053, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 42053, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 42053, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 42053, null]], "pdf_page_numbers": [[0, 4108, 1], [4108, 9460, 2], [9460, 14856, 3], [14856, 19510, 4], [19510, 24858, 5], [24858, 29726, 6], [29726, 33324, 7], [33324, 36983, 8], [36983, 42053, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 42053, 0.0]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
8e977be06cdf9761798e0dc64215f2369214d4be
Version 6.15.0 - (released 6/22/2016)

- New feature: Enhanced radio buttons and checkboxes for surveys - A new survey option "enhanced radio buttons and checkboxes" can be found on the Survey Settings page in the Online Designer, in which a user can enable the feature so that radio buttons and checkboxes are displayed differently on the survey page, appearing as large animated buttons that look more modern and stylish than traditional radios and checkboxes. This feature can be enabled for any given survey in a project, where it will transform *all* radios and checkboxes on the survey into the enhanced version. Note: This feature does not work for radios and checkboxes in a matrix.
- Improvement: Server-side field validation - In addition to the existing client-side field validation that is performed on surveys and data entry forms, REDCap will now also perform server-side validation of all submitted values prior to saving them to ensure they are valid. This means verifying the value via a text field's field validation type or, for a multiple choice field, verifying that the value is indeed a valid choice for the field. If values are considered invalid, they will not be saved, and the page will be reloaded with an error message (similar to the Required Fields error message) informing the user that invalid values were entered and should be corrected, if desired. This new server-side validation improves the overall quality of data being entered on surveys and forms.
- New feature: Create custom public survey link - On the "Public Survey Link" page in a project that utilizes surveys, users now have the option to create their own custom public survey link that begins with "http://is.gd" (e.g., http://is.gd/diabeticsurvey), in which the custom URL will simply redirect to the public survey in their project. They may enter a desired URL, and REDCap will check whether that URL has already been taken. If not, it will store the custom URL in the project so that it can always be obtained on the Public Survey Link page.
- New Action Tag: @HIDEBUTTON - Hides the 'Now' or 'Today' button that is typically displayed to the right of date, time, and date/time fields.
- New Action Tag: @APPUSERNAME-APP - In the REDCap Mobile App, this action tag sets a field's value to the app username of the current mobile app user - i.e., their username in the mobile app, which is not necessarily the same as their REDCap server username that can be captured using @USERNAME. NOTE: For use only in the REDCap Mobile App.
- Improvement: Updated "Help & FAQ" page. It has better navigation and is easier to read.
- Improvement/change: If a user has had access to REDCap for more than 7 days and they are logging in to REDCap's home page, then it will redirect them to the My Projects page after a successful login. This saves them a click, assuming that they have no need to view the home page at this point. Note: Due to certain limitations, this feature is only available for installations using "LDAP", "Table-based", or "LDAP & Table-based" authentication methods.
- Improvement: Users can now only send the request one time for moving a project to production or requesting that a production project be deleted. In previous versions, the request could be sent many times and could thus cause confusion for the administrator regarding which request should be processed. Additionally, any user that has submitted either of these types of requests may also manually cancel the request by clicking a "Cancel request" button next to the disabled button where the request was originally submitted.
- Improvement: Administrators can now add comments to items in the Control Center To-Do List. A comment can be added or edited for any item in the To-Do List.
- Change: Modified the "Table-based User Mgmt" link on the Control Center's left-hand menu so that its text says "Add Users (Table-based Only)" instead, for greater clarity.
- Change: Added the new video "Mobile App Project Setup" on the REDCap Mobile App page in a project, which discusses the process of setting up the mobile app for a given project.
- Change: If an entire data entry form is disabled due to a user's form-level privileges being set to "read-only", the user would mistakenly not be able to add an E-signature to the form even if they have E-signing privileges. This is inconsistent since they can Lock or Unlock the form but cannot E-sign it. Users with E-signing privileges will now be able to e-sign a data entry form that is disabled. This is allowable since Locking and E-signing privileges are separate from data entry privileges.
- Change: A link to the Control Center was added (for super users only) at the top left of a project page (to the right of the "My Projects" link).
- Change: All links pointing to pages on the Trac wiki have now been replaced with their corresponding pages on the new REDCap Community website (https://community.projectredcap.org) since the Trac wiki at devguard.com has now been officially retired.
- Improvement: A field's Section Header and Field Annotation are now displayed in the Codebook for the project.
- Change: Updated some of the language in the Install module to provide better guidance and clarity for the installation process, and also to remove language that caters heavily to phpMyAdmin as a preferable MySQL client. Additionally, text was added stating that MariaDB is a completely compatible alternative to MySQL as a database back-end.
- Change: The attribute autocomplete="off" was added to all text input fields on surveys and data entry forms (and to the form tag itself) to allow institutions to better comply with certain regulatory requirements, even though most modern browsers ignore this attribute.

NEW FEATURES & IMPROVEMENTS:

- New feature: Administrator To-Do List
  - New page in the Control Center that allows all REDCap administrator requests to be processed in a single place. This includes approving production drafted changes, API token requests, create/copy projects (if applicable), and move projects to production (if applicable).
  - All requests will be listed in a table on this page and will include all associated information about the request, such as time of request, requestor, project, request type, etc.
  - If desired, email notifications can be disabled on this page if administrators no longer wish to receive the emails associated with these requests, but instead wish to solely use the To-Do List page without any email notifications.
  - NOTE: This page will always reflect the current status of all requests, whether or not they were processed using the tables below or using the link inside the email to the administrator (if email notifications are enabled).
- New action tag: @USERNAME - Sets a field's value to the username of the current REDCap user. If this is used on a survey, the value will be "[survey respondent]". Once the value is captured, it will not be changed when visiting the page at a later time.
- New action tag: @DEFAULT - Sets a field's initial value.
  - This action tag allows a field to have a specified default value when viewing the field on a survey or data entry form that has not yet had any data saved for it (i.e., when the form status icon is gray or when a survey has not been started).
  - The format must follow the pattern @DEFAULT="????", in which the desired default value should be inside single or double quotes.
  - For checkbox fields, simply separate multiple checkbox values with commas - e.g., @DEFAULT='1,3,6'. NOTE: The default value does *not* get applied during any data imports (via API or Data Import Tool) but only operates when viewing survey pages and data entry forms.
  - For text fields, you may even perform Piping inside the default value to pipe data from another field in the project - e.g., @DEFAULT="Name: [first_name] [last_name], DOB: [dob]".
  - NOTE: If being used on a date or datetime field, the date value inside the quotes must be in Y-M-D format - e.g., @DEFAULT='2007-12-25'.
  - If this action tag is used on a survey question that is utilizing a survey pre-fill method (via query string or POST submit), then the pre-fill values supplied will override the default values provided by the action tag.
- New hook: redcap_every_page_top - Allows custom actions to be performed at the top of every page in REDCap (including plugins that render the REDCap page header). A sketch of a hook implementation follows this list.
- New hook: redcap_every_page_before_render - Allows custom actions to be performed by every PHP script in REDCap (including plugins) before the script itself begins to be formally processed.
- Improvement: When in production, users can now request that a project be deleted by an administrator. The request will be added to the To-Do List in the Control Center, and the administrator will be emailed (if email notifications are enabled).
- New method for hooks/plugins: REDCap::getCopyright - Returns the REDCap copyright text to be displayed on all pages - i.e., "REDCap X.X.X - © 20XX Vanderbilt University". This is recommended to be used if a hook is utilized to alter an existing REDCap page so much that the normal page footer containing the REDCap copyright notice is no longer displayed. You may then use this method to display the copyright notice on that page in a different way or in a different location. This is to conform to the REDCap license agreement, which stipulates that the REDCap copyright notice should not be removed from any REDCap pages (this excludes plugins).
- Change: To be more consistent and simpler with regard to how REDCap administrators are notified about user-submitted requests, the "Person who will approve changes for production projects" option has been removed from the system-level and project-level configurations. Instead, REDCap will now use the "Project Contact Person" name and email for *all* requests rather than using the two options for various requests, which could be confusing regarding which would be used for what type of request. This will keep things much simpler going forward.
- Change: On the General Configuration page and Edit A Project's Settings pages in the Control Center, the option "Project Contact Person" has been re-labeled as "Name of REDCap Administrator" to clarify what this option refers to.
- Change/improvement: Piping can now work recursively in case the initial data that is piped also contains variables that should then be piped.
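As an illustration of the new hooks listed above, the following is a minimal, hedged sketch of what a redcap_every_page_top implementation could look like in hook_functions.php. The $project_id parameter and the banner markup are illustrative assumptions rather than official REDCap sample code.

```php
<?php
// Hypothetical hook sketch: runs at the top of every REDCap page.
// Place in hook_functions.php (the hooks file shipped with the installation).
function redcap_every_page_top($project_id = null)
{
    // Only act on project pages, not on system-level pages.
    if (empty($project_id)) {
        return;
    }
    // Inject an illustrative banner at the top of every project page.
    print "<div style='border:1px solid #ccc; padding:6px; margin:8px 0;'>"
        . "Reminder: project PID " . intval($project_id) . " is a test project."
        . "</div>";
}
```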
- Change/improvement: The mailto link at the bottom left of a project has now been replaced with a "Contact REDCap administrator" button that, when clicked, opens the user's default email client and pre-fills the email body with their username, the title of the current project, and a link to the project. This should help administrators in cases where this information is not provided by the users themselves, which is often the case.
- Change: When adding a new field in the Online Designer, the Custom Alignment setting no longer resets back to "Right Vertical (RV)" alignment every time as it did in previous versions, but instead reverts to the alignment value of the previous field that was opened beforehand on that page.
- Change/improvement: The Browse Projects page in the Control Center now displays a project's PID (i.e., its project ID number) next to the project title to allow administrators to more easily identify a project, especially when some projects are similarly named and thus difficult to tell apart.
- Improvement: Added an "Edit" link on the left-hand project menu in the "Project Bookmarks" panel to allow users to easily navigate to the Project Bookmarks page if they have Project Design/Setup privileges.
- Improvement: When copying a user role on a project's User Rights page, the Edit Role popup now opens immediately after copying a role to allow the user to more easily modify the newly created role.
- Change: Small clarification in the instruction text shown when a REDCap administrator is creating an API token for a user.
- Change: If the REDCap web server already has a large value set for the "max_execution_time" setting in PHP.INI, then REDCap will not lower that setting's value if REDCap's required value is smaller than the system value.
- Change/improvement: The REDCap installation package now comes with the hook_functions.php file and a hooks directory, and the path to the hook_functions.php file is set automatically during the installation process.
- Change/improvement: REDCap now uses the value of session.cookie_secure in the PHP.INI configuration file when setting the default cookie parameters. This allows the "Secure" cookie attribute to be set to True if session.cookie_secure=On in PHP.INI. By default, the "Secure" cookie attribute is set to False.

Version 6.13.3 - (released 4/22/2016)

- Change: Added a compatibility notice for the embedded audio option for attachments on Descriptive fields on surveys and data entry forms. The notice informs the user that the embedded audio option for attachments is not 100% compatible with all audio file types across all web browsers. This is not a limitation in REDCap, but simply a compatibility issue across web browsers. The most compatible audio file types to use are MP3 and WAV. Other audio types may work in some browsers but not in others. Unfortunately, there is not always an easy way to know which audio file will work in which browser, especially as operating systems and web browsers evolve over time.
- Improvement: When a user opens the Data History popup or the Data Resolution Workflow popup for a given record/field, the popup should now open a bit more quickly than before if it had been slow in the past, especially for projects with many records and/or data changes.

Version 6.13.2 - (released 4/19/2016)

- Change: Disabled the backspace-goes-back "feature" of browsers, which could cause unexpected issues and confusion if a user accidentally clicked the backspace button on a page.
- Change: Matrix fields are now no longer allowed to have a Field Label. (Ticket #1188)
- Change: The width of date and datetime fields was increased on surveys with "Large" or "Very Large" text so that the entire value is always visible.
- Change: The Configuration Check page now checks to make sure that the DOM extension in PHP is installed.
- Change: The enhancement added in a recent version to prompt the user to have a text field's value automatically trimmed if the value begins or ends with whitespace was mistakenly being applied to Notes fields when it should have only been applied to Text fields.
- New feature: Responsive design of REDCap web pages
  - REDCap now has a more flexible and responsive user interface that conforms to and fits screens on devices of all sizes.
  - Major improvement to how surveys and data entry forms look on mobile devices (i.e., phones), including automatic font increase and forced left-alignment of questions for a better user experience when screen real estate is limited.
  - REDCap now has the Bootstrap front-end framework embedded inside it, thus allowing plugin/hook developers to utilize all the Bootstrap UI elements and features.
  - Technical note: The "label" CSS class used for field labels in the question table on surveys and data entry forms has been replaced with "labelrc" to prevent conflicts with Bootstrap.
- Improvement: Slider fields on surveys and data entry forms are now much easier to use on mobile/touch devices.
- Improvement/change: When a user moves a project to production (or requests to have a project moved to production) on the Project Setup page, it now forces them to choose whether they want to delete all project data or keep all existing records. In previous versions, it would pre-check the "delete all data" checkbox, which could sometimes cause users to unwittingly lose all their data if not paying attention to what they were agreeing to.
- Improvement/change: A new system-level setting allows administrators to hide the option where users can export an entire project as a single REDCap XML file (i.e., project backup). Because some institutions are wary of users feeling the need to download an entire project and its data, and users may unwittingly download unencrypted project backups (containing data) to store on their local drive, this could be a security or privacy concern. The option can now be disabled on the Modules Configuration page in the Control Center.
- Change: The Record Locking Customization page in a project now allows normal users to view the locking and e-signature information in read-only format when in production. In previous versions, only super users were allowed to view this page in production status.
- Improvement: The User Rights page in a project now prevents users from mistakenly assigning themselves to a role that does not have User Rights privileges, which could inadvertently cause them to be locked out of that page in their own project.
- Change: On the Survey Settings page in the Online Designer, the text for the "Delete Survey" button at the bottom of the page has been changed to "Delete Survey Settings" to reduce confusion regarding what the button does.
- New feature: Live Filters for reports
  - Any report can now have up to 3 fields designated as a Live Filter. The Live Filters are displayed as drop-downs at the top right of the page when viewing a report, and selecting a Live Filter will cause the report to be re-run in real time using the Live Filter value as a filter.
  - If exporting a report that has a Live Filter selected, the export popup window will provide an extra choice to allow the user to export the full report data set or to apply the currently selected Live Filter to the report when exporting.
  - Note: Currently only multiple choice fields can be used as Live Filters (as well as Events, if longitudinal, and Data Access Groups, if any exist).
- Improvement: The left-hand menu of each project now has collapsible sections so that a user may collapse a section for easier navigation or a more compact page. The collapsed state of each section in each project is remembered using a cookie on the user's device, so when a user returns to the project in the future, the menu section remains in the same collapsed/non-collapsed state as the last time they viewed it on that device.
- Improvement: Performing data exports or viewing reports for projects containing very large amounts of records, especially in conjunction with lots of events and/or fields, should no longer halt the export process very often. In the past this might cause REDCap to display an error message saying "the data export is not able to complete" due to the large amount of data being exported or viewed. When too much web server memory is used during the data export process, REDCap will now invisibly revert to a backup process that utilizes a local temp file on the server for temporarily storing data during the export (rather than relying solely on server memory). This allows the export process to complete successfully; however, the process will take several times longer than if simply using server memory.
- Change: When exporting an entire project as a REDCap Project XML file, it now provides the option "Include all uploaded files and signatures?", which is unchecked by default. In previous versions, it automatically included all uploaded files and signatures in the resulting XML file, but this often caused the export to fail due to the project either containing many files or containing very large files.
- Change: A new parameter "exportFiles" (boolean) was added to the REDCap::getProjectXML developer method for plugins and hooks. The parameter, which defaults to FALSE, specifies whether or not the resulting XML will include all files (base64 encoded) that were uploaded for File Upload and Signature fields for all records in the project. Please note that while the previous version (6.12.0) exported all files in the resulting XML by default, it no longer does that and must now be specified explicitly.
- Change: A new parameter "exportFiles" (boolean) was added to the "Export Project XML" API method. The parameter, which defaults to FALSE, specifies whether or not the resulting XML will include all files (base64 encoded) that were uploaded for File Upload and Signature fields for all records in the project. Please note that while the previous version (6.12.0) exported all files in the resulting XML by default, it no longer does that and must now be specified explicitly.

Version 6.12.0 - (released 2/26/2016)

**NEW FEATURES & IMPROVEMENTS:**

- Improvement: New option to download the charts displayed on the "Stats & Charts" tab of the "Data Exports, Reports, and Stats" module. The charts will download as PNG image files.
- New feature: Users may now export a project's data in CDISC ODM format. This new option is found on the "Data Exports, Reports, and Stats" page in the data export popup when selecting the export format.
- New feature: An entire REDCap project can now be exported as a single XML file (which happens to be in CDISC ODM format). The file includes events, arms, instruments, fields, and project attributes - even Descriptive field attachments. If the project contains data, then the user can also optionally export the project data (including uploaded files) in the same XML file. This XML file can serve as a snapshot or backup copy of the project, and can even be imported on the Create New Project page to create a clone (more or less) of the project.
- New feature: Create a new project from a REDCap XML file (or other XML file containing metadata in CDISC ODM format). This is a new option on the Create New Project page, which allows the user to optionally upload their XML file rather than choosing a project template or creating the project from scratch.
- New and improved SDK developer methods for plugins and hooks
  - REDCap::getProjectXML - New method - Returns the contents of an entire project (records, events, arms, instruments, fields, and project attributes - even uploaded files and Descriptive field attachments) as a single XML file, which is in CDISC ODM format.
  - REDCap::getData - The parameter for data format now accepts a value of "odm" to export data in CDISC ODM format. This only returns data (not the project structure/metadata).
  - REDCap::saveData - The parameter for data format now accepts a value of "odm" to import data in CDISC ODM format. This only imports data (not the project structure/metadata).
- New and improved API methods
  - Export Project XML - New API method - Returns the contents of an entire project (records, events, arms, instruments, fields, and project attributes - even uploaded files and Descriptive field attachments) as a single XML file, which is in CDISC ODM format.
  - Export Records - The parameter for data format now accepts a value of "odm" to export data in CDISC ODM format. This only returns data (not the project structure/metadata).
  - Import Records - The parameter for data format now accepts a value of "odm" to import data in CDISC ODM format. This only imports data (not the project structure/metadata).
  - Create Project - A new optional parameter named "odm" can be used to pass the ODM XML string of an entire project's structure (the same as output by the Export Project XML method) when creating a new project using a Super API Token. This allows you not only to create the project with the API request, but also to import all fields, forms, and project attributes (and events and arms, if longitudinal) as well as record data all at the same time.
- Change: Added a new check to confirm that the version directory of the current REDCap version (e.g., redcap_v6.12.0) has not been mistakenly removed from the web server, which would result in a strange non-styled Home page or My Projects page.

Version 6.11.5 - (released 2/12/2016)

- New feature: Domain whitelist for cross-domain HTTP access control - By default, for flexibility purposes, AJAX requests (via JavaScript) can be made to REDCap from any domain/URL. If you wish to restrict this so that only certain domains can make cross-domain AJAX requests to REDCap, then you will need to set the domain name of all allowed access control origins (i.e., the domain of the URLs) in the text box to the right. If the text box is left blank (default), then any domain will be able to make cross-domain AJAX requests to REDCap. Restricting access control to specific domains is generally considered to make REDCap more secure by helping prevent possible Cross-Site Scripting attacks by malicious users. This setting can be found at the bottom of the Security & Authentication page in the Control Center.
- Improvement: When an instrument has been enabled as a survey and the survey has the setting "Auto-continue to next survey" enabled, a down arrow icon will now appear in the Online Designer for that survey to denote that this setting has been enabled.
- Change: New videos for the REDCap Mobile App.

Version 6.11.3 - (released 1/29/2016)

- Change: "action" was added to the reserved variable name list to prevent users from creating fields with that variable name, since it can cause JavaScript errors to occur on a survey or form in certain browsers when the field is used in branching logic. (Ticket #1093)
- Change: The Help & FAQ page was updated.
- Change/improvement: If the Survey Login feature is enabled in a project, it now offers a "Show value" checkbox immediately below each login field; when checked, it will remove the password mask from the field to allow the participant to view the value as clear text. Removing the mask may be necessary in certain cases, such as when entering specially formatted values like dates/times, and also when using mobile devices, on which it might be more difficult to type with accuracy. Note: The password mask for text fields on the survey login form was added recently in version 6.11.0, whereas in prior versions the password fields had unmasked clear text values. (Ticket #1084)

Version 6.11.2 - (released 1/16/2016)

- Change: When performing the field mapping step in the Dynamic Data Pull (DDP) module in a project, it would display a question mark icon next to each field in the tree of source fields even if the metadata web service does not provide a "description" attribute for the field. This could be confusing since the icon would essentially serve no purpose in that case. It now only displays the icon if a description is actually provided by the metadata web service for a given field.
- Change: When a project is in production status, it was too difficult for users to find the Check For Identifiers page, so it has now been added to the bottom of the Project Setup page when the project is in production.

Version 6.11.1 - (released 12/22/2015)

- Change/improvement: When users are being assigned to a role while being granted access to a project on the User Rights page, it now displays a checkbox option to have the user emailed in order to notify them of having been granted access to the project. In previous versions, there was no way to notify a user when they were added to a project via role assignment. (Ticket #1051)

Version 6.11.0 - (released 12/18/2015)

NEW FEATURES & IMPROVEMENTS:

- New API methods (please see the API documentation embedded in REDCap for details regarding these methods)
  - Arm import/delete - for longitudinal projects only; requires API Import privileges and Project Design/Setup privileges
  - Event import/delete - for longitudinal projects only; requires API Import privileges and Project Design/Setup privileges
  - Import instrument-event mappings - for longitudinal projects only; requires API Import privileges and Project Design/Setup privileges
  - Import metadata (i.e., data dictionary) - available only in development status; requires API Import privileges and Project Design/Setup privileges
  - Import users (import new users into a project while setting their user privileges, or update the privileges of existing users in the project) - requires API Import privileges and User Rights privileges
  - Create project - Allows a user to create a new REDCap project while setting some project attributes, such as project title, project purpose, enabling/disabling record auto-numbering, enabling the project as longitudinal, and enabling surveys in the project.
    - This method requires a Super API Token that must be granted to a user by a REDCap administrator on the API Tokens page in the Control Center.
    - After the super token has been granted, the user can view the super token on their My Profile page.
- Improvement: Added support for hosting REDCap in Google Cloud App Engine (with Google Cloud Storage). When hosted on the Google Cloud Platform, you can set the file storage option to "Google Cloud Storage" on the File Upload Settings page and provide the names of the buckets where the files will be stored. It also works seamlessly with Google Cloud SQL to host the MySQL back-end for REDCap.
- Improvement: REDCap now supports secure connections to MySQL using SSL/TLS. The following PHP variables must be added to database.php in the main "redcap" directory (the first 3 are required at minimum, while the last 2 might be optional for certain configurations).
  1. $db_ssl_key = ''; // e.g., '/etc/mysql/ssl/client-key.pem'
  2. $db_ssl_cert = ''; // e.g., '/etc/mysql/ssl/client-cert.pem'
  3. $db_ssl_ca = ''; // e.g., '/etc/mysql/ssl/ca-cert.pem'
  4. $db_ssl_capath = NULL;
  5. $db_ssl_cipher = NULL;
- Improvement: Users may now download and upload arms and events as a CSV file on the "Define My Events" page, as well as download and upload the instrument-event designations as a CSV file on the "Designate Instruments for My Events" page. Using these methods, users can now fully reconstruct the structure of a project if they wish to copy it: they could download the data dictionary file, arms file, events file, event mappings file, and data export file, and then upload all of them into a new project to recreate it. In previous versions, this could only be done for classic projects, but this now allows it for longitudinal projects as well. When uploading the CSV file for arms, events, or event mappings, a preview is displayed to show what changes will be made, such as which things will be added, modified, deleted, or stay the same.
- Improvement: "Select all" and "deselect all" links were added to the "Designate Instruments for My Events" page to allow users to more easily check off the checkboxes if many instruments and/or events exist in the project.
- Improvement: When assigning projects to Project Folders, there is now a checkbox option to hide archived projects in the project list. This should make it easier for users to ignore those projects during the folder assignment process.
- Improvement: A new optional API parameter named "filterLogic" was added to the API method "Export Records". filterLogic should be a string of logic text (e.g., [age] > 30) for filtering the data to be returned by this method, in which the API will only return the records (or record-events, if a longitudinal project) where the logic evaluates as TRUE. This parameter is blank/null by default unless a value is supplied. Please note that if the filter logic contains any incorrect syntax, the API will respond with an error message.
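As an illustration of the new filterLogic parameter, here is a minimal, hedged sketch of an "Export Records" API call from PHP. The endpoint URL, the token placeholder, and the example logic string are assumptions for illustration, not values from the release notes.

```php
<?php
// Hypothetical sketch of an "Export Records" API call using filterLogic.
$fields = array(
    'token'       => 'YOUR_API_TOKEN',   // placeholder project API token
    'content'     => 'record',
    'format'      => 'json',
    'type'        => 'flat',
    'filterLogic' => '[age] > 30',       // only records where this evaluates TRUE
);

$ch = curl_init('https://redcap.example.org/api/');   // placeholder REDCap host
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($fields));
$records = curl_exec($ch);
curl_close($ch);

print $records;
```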
- **Improvement:** The Activity Graphs page in the Control Center now includes two new charts: 1) Database Usage (MB), and 2) Usage by Uploaded Files (MB).

**BUG FIXES & OTHER CHANGES:**

- **Change/improvement:** If the Survey Login feature is enabled in a project, it now applies a password mask to the text fields on the survey login form in order to obscure the participant's password value(s). In previous versions, the password fields were displayed as clear text.
- **Changes to existing API methods**
  - Change: For the API method "Export Users", many more user privilege rights are included in the response. The following is the full header list: username,email,firstname,lastname,expiration,data_access_group,data_access_group_id,design,user_rights,data_access_groups,data_export,reports,stats_and_charts,manage_survey_participants,calendar,data_import_tool,data_comparison_tool,logging,file_repository,data_quality_create,data_quality_execute,api_export,api_import,mobile_app,mobile_app_download_data,record_create,record_rename,record_delete,lock_records_all_forms,lock_records,lock_records_customization,forms
  - Change: For the API method "Export Users", when requesting a response in CSV format, form-level rights are returned in a different format in order to prevent possible duplication of other new user privileges that are returned, in which all form rights will now be consolidated into a single column named "forms" (whereas in previous versions each form was represented as an individual column). The last column of the CSV string returned will have "forms" as the header, and the value will be each [unique] form name and its numerical value as a colon-separated pair, with all the form value pairs strung together as a single comma-separated string (e.g., "demographics:1,visit_data:3,baseline:1"). See the full CSV example below of two users exported from a project.

    username,email,firstname,lastname,expiration,data_access_group,data_access_group_id,design,user_rights,data_access_groups,data_export,reports,stats_and_charts,manage_survey_participants,calendar,data_import_tool,data_comparison_tool,logging,file_repository,data_quality_create,data_quality_execute,api_export,api_import,mobile_app,mobile_app_download_data,record_create,record_rename,record_delete,lock_records_all_forms,lock_records,lock_records_customization,forms
    harrisp@gmail.com,Joe,User1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,"demographics:3,baseline_data:1,visit_lab_data:1,patient_morale_questionnaire:1,visit_blood_workup:1,completion_data:1,completion_project_questionnaire:1,visit_observed_behavior:1"
    taylorr4@gmail.com,Joe,User,2015-12-08,group_a,1,0,0,2,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,"demographics:3,baseline_data:1,visit_lab_data:1,patient_morale_questionnaire:1,visit_blood_workup:1,completion_data:1,completion_project_questionnaire:1,visit_observed_behavior:1"

  - Change: For the API method "Export Users", when requesting a response in XML format, the main parent tags at the beginning and end of the response will no longer be <records> but instead <users>, to be less confusing (since "records" often denotes something else in REDCap) and also to be more consistent with how other API methods return XML items.
  - Change: For the API method "Export Users", the new "data_access_group_id" field was added, in which it returns the numerical group ID number that the "data_access_group" field used to return in previous versions. And now, the unique group name of a user's Data Access Group is returned for the "data_access_group" field rather than the numerical group ID number.
  - Change: The API method "Export Instrument-Event Mappings" now returns a different structure when exporting as JSON or XML (the CSV format remains the same). It will now export with "arm_num", "unique_event_name", and "form" as attributes of each item/mapping. [The JSON and XML examples shown at this point in the original release notes are not reproduced here.]
  - Improvement: For the "Export Project Information" API method, the following two project attributes were added:
    - secondary_unique_field - The variable name of the secondary unique field defined in the project (if applicable).
    - display_today_now_button - Value will be "0" or "1" (i.e., False or True). If "0", then do NOT display the today/now button next to date/datetime fields on data entry forms and surveys. If "1" (default), display them.
  - Change: When using an API token associated with a super user account, the API now recognizes the API user as having maximum privileges (i.e., super user privileges) with regard to API requests, whereas in previous versions it only inferred the user's privileges literally from what is defined on the project's User Rights page, which was inconsistent with how super user rights are recognized by REDCap in the front-end user interface.
- Change/improvement: The Control Center's System Statistics page now has the counts for Total Logged Events and Dynamic Data Pull (DDP) separated into separate AJAX calls, since they were causing the whole table to load very slowly on the page.
- Change: If using Google OpenID authentication and a user logs in for the first time, it will now capture the user's first name, last name, and email address and add them to the user's REDCap account automatically.
- Improvement: When installing REDCap, it is now possible to use the MySQL socket value in the database configuration by adding the PHP variable $db_socket to database.php in the main "redcap" directory.
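A hedged example of how the new $db_socket variable might look in database.php. The surrounding variable names reflect a typical REDCap database configuration, and the socket path shown is a common MySQL default rather than a value from the release notes.

```php
<?php
// Illustrative database.php fragment (placeholder values only).
$hostname  = 'localhost';
$db        = 'redcap';
$username  = 'redcap_user';
$password  = 'change_me';
$db_socket = '/var/run/mysqld/mysqld.sock';  // new: connect to MySQL via its socket
```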
{"Source-Url": "https://redcap.ucsf.edu/announcement/REDCap_V6.15_NewFeatures.pdf", "len_cl100k_base": 8010, "olmocr-version": "0.1.53", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 33907, "total-output-tokens": 8812, "length": "2e12", "weborganizer": {"__label__adult": 0.00032258033752441406, "__label__art_design": 0.0003571510314941406, "__label__crime_law": 0.00024509429931640625, "__label__education_jobs": 0.0014934539794921875, "__label__entertainment": 8.499622344970703e-05, "__label__fashion_beauty": 0.00010925531387329102, "__label__finance_business": 0.0003495216369628906, "__label__food_dining": 0.00021588802337646484, "__label__games": 0.0006337165832519531, "__label__hardware": 0.0005130767822265625, "__label__health": 0.0003840923309326172, "__label__history": 0.00014901161193847656, "__label__home_hobbies": 0.00010991096496582033, "__label__industrial": 0.00018906593322753904, "__label__literature": 0.00020062923431396484, "__label__politics": 0.00014781951904296875, "__label__religion": 0.00035500526428222656, "__label__science_tech": 0.002201080322265625, "__label__social_life": 0.00021767616271972656, "__label__software": 0.112548828125, "__label__software_dev": 0.87841796875, "__label__sports_fitness": 0.00022912025451660156, "__label__transportation": 0.0001302957534790039, "__label__travel": 0.00020885467529296875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 37492, 0.03089]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 37492, 0.07136]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 37492, 0.86975]], "google_gemma-3-12b-it_contains_pii": [[0, 3129, false], [3129, 5608, null], [5608, 8411, null], [8411, 11863, null], [11863, 14424, null], [14424, 17439, null], [17439, 20750, null], [20750, 23328, null], [23328, 26378, null], [26378, 28643, null], [28643, 31750, null], [31750, 34810, null], [34810, 37072, null], [37072, 37492, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3129, true], [3129, 5608, null], [5608, 8411, null], [8411, 11863, null], [11863, 14424, null], [14424, 17439, null], [17439, 20750, null], [20750, 23328, null], [23328, 26378, null], [26378, 28643, null], [28643, 31750, null], [31750, 34810, null], [34810, 37072, null], [37072, 37492, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 37492, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 37492, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 37492, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 37492, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 37492, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 37492, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 37492, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 37492, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 37492, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 37492, null]], "pdf_page_numbers": [[0, 3129, 1], [3129, 5608, 2], [5608, 8411, 3], [8411, 11863, 4], [11863, 14424, 5], [14424, 17439, 6], [17439, 20750, 7], [20750, 23328, 8], [23328, 26378, 9], [26378, 28643, 10], [28643, 31750, 11], [31750, 34810, 12], [34810, 37072, 13], 
[37072, 37492, 14]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 37492, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
4689cd6af5b28a42da2ccc14dd439080078faa9a
System Calls

Contents
- A high-level view of system calls
  - Mostly from the user's perspective
  - From the textbook (section 1.6)
- A look at the R3000
  - A brief overview, mostly focused on exception handling
  - From the "Hardware Guide" on the class web site
  - Allows me to provide "real" examples of the theory
- System call implementation
- Case study: OS/161 system call handling

[Figure: applications at user level issue requests (system calls) to the operating system, which runs at kernel level.]

A Brief Overview of Classes of System Calls
- From the user's perspective:
  - Process management
  - File I/O
  - Directory management
  - Some other selected calls
- There are many more; on Linux, see man syscalls for a list.

System Calls
- Can be viewed as special procedure calls
  - Provide for a controlled entry into the kernel
  - While in the kernel, they perform a privileged operation
  - Return to the original caller with the result
- The system call interface represents the abstract machine provided by the operating system.

Some System Calls For Process Management

Some System Calls For File Management

<table>
  <thead>
    <tr><th>Call</th><th>Description</th></tr>
  </thead>
  <tbody>
    <tr><td>f = open(a file, mode, ...)</td><td>Open a file for reading, writing or both</td></tr>
    <tr><td>f = create()</td><td>Create a new file</td></tr>
    <tr><td>r = read(a buffer, position)</td><td>Read data from the file into a buffer</td></tr>
    <tr><td>w = write(a buffer)</td><td>Write data from a buffer into a file</td></tr>
    <tr><td>p = seek(a file position, whence)</td><td>Move the file position</td></tr>
    <tr><td>a = stat(file, &amp;status)</td><td>Get file status information</td></tr>
  </tbody>
</table>

Some System Calls For Directory Management

<table>
  <thead>
    <tr><th>Call</th><th>Description</th></tr>
  </thead>
  <tbody>
    <tr><td>d = read(a directory)</td><td>Read a file descriptor</td></tr>
    <tr><td>d = mkdir(a directory)</td><td>Create a new directory</td></tr>
    <tr><td>d = rmdir(a directory)</td><td>Remove a directory</td></tr>
    <tr><td>d = stat(a directory)</td><td>Get a directory status</td></tr>
    <tr><td>d = list(a directory)</td><td>List the files in a directory</td></tr>
    <tr><td>d = link(a file)</td><td>Create a new file link</td></tr>
    <tr><td>d = unlink(a file)</td><td>Remove a file link</td></tr>
  </tbody>
</table>

Some System Calls For Miscellaneous Tasks

<table>
  <thead>
    <tr><th>Call</th><th>Description</th></tr>
  </thead>
  <tbody>
    <tr><td>c = connect()</td><td>Connect to a server</td></tr>
    <tr><td>c = disconnect()</td><td>Disconnect from a server</td></tr>
    <tr><td>c = listen()</td><td>Listen for a connection</td></tr>
    <tr><td>c = accept()</td><td>Accept a connection</td></tr>
    <tr><td>c = send()</td><td>Send data to a client</td></tr>
    <tr><td>c = recv()</td><td>Receive data from a client</td></tr>
  </tbody>
</table>

System Calls
- A stripped-down shell:

```c
while (TRUE) {                              /* repeat forever */
    type_prompt();                          /* display prompt */
    read_command(command, parameters);      /* input from terminal */
    if (fork() != 0) {                      /* fork off child process */
        /* Parent code */
        waitpid(-1, &status, 0);            /* wait for child to exit */
    } else {
        /* Child code */
        execve(command, parameters, 0);     /* execute command */
    }
}
```

System Calls
- Some Win32 API calls

The MIPS R2000/R3000
- Before looking at system call mechanics in some detail, we need a basic understanding of the MIPS R3000.

MIPS R3000
- RISC architecture with a 5-stage pipeline
- [Figure: MIPS R3000 pipeline diagram]
- Load/store architecture
  - No instructions that operate on memory except load and store
  - Simple loads/stores to/from memory from/to registers
    - Store word: sw r4, (r5) - store the contents of r4 in memory using the address contained in register r5
    - Load word: lw r3, (r7) - load the contents of memory into r3 using the address contained in r7
  - Delay of one instruction after a load before the data is available in the destination register: there must always be an instruction between a load from memory and the subsequent use of that register (see the short sketch at the end of this section)
  - Load/store instructions: lw, sw, lb, sb, lh, sh, ...
- Arithmetic and logical operations are register-to-register operations
  - No arithmetic operations on memory
  - Example: add r3, r2, r1 means r3 = r2 + r1
  - Some other instructions: add, sub, and, or, xor, sll, srl

MIPS Registers
- User-mode accessible registers:
  - 32 general purpose registers
    - r0 is hardwired to zero
    - r31 is the link register for the jump-and-link (JAL) instruction
  - HI/LO - two 32-bit registers for multiply and divide
  - PC - not directly visible; modified implicitly by jump and branch instructions

Branching and Jumping
- Branching and jumping have a branch delay slot: the instruction following a branch or jump is always executed.

```assembly
        sw   $0, ($3)
        j    1f
        li   $2, 1
1:      sw   $2, ($3)
```

Jump and Link
- JAL is used to implement function calls: r31 = PC + 8
- Jump Register (JR) is used to return from a function call

R3000 Address Space Layout
- kseg0:
  - 512 megabytes
  - Fixed translation window to physical memory: 0x80000000 - 0x9fffffff virtual = 0x00000000 - 0x1fffffff physical; the MMU is not used
  - Cacheable
  - Only kernel-mode accessible
  - Where the kernel code is placed
- kseg1:
  - 512 megabytes
  - Fixed translation window to physical memory: 0xa0000000 - 0xbfffffff virtual = 0x00000000 - 0x1fffffff physical; the MMU is not used
  - NOT cacheable
  - Only kernel-mode accessible
  - Where devices are accessed (and the boot ROM)
- kseg2:
  - 1024 megabytes
  - MMU translated (mapped)
  - Cacheable
  - Only kernel-mode accessible

System/161 Aside
- System/161 simulates an R3000 without a cache, so you don't need to worry about cache issues when programming OS/161 running on System/161.

Coprocessor 0
- The processor control registers are located in CP0:
  - Exception management registers
  - Translation management registers
- CP0 is manipulated using the mtc0 (move to) and mfc0 (move from) instructions; mtc0/mfc0 are only accessible in kernel mode.
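The following short sketch is not from the original notes (register numbers are arbitrary); it simply illustrates the two delay slots described above: the load delay slot, where an instruction must separate a load from the first use of its result, and the branch delay slot, where the instruction after a jump is executed regardless.

```assembly
# Load delay slot: r3 cannot be used in the instruction right after the lw.
        lw   r3, (r7)        # load word from memory into r3
        nop                  # filler instruction (load delay slot)
        add  r4, r3, r2      # r3 now holds the loaded value

# Branch delay slot: the li is executed even though it follows the jump.
        j    done
        li   r2, 1           # branch delay slot instruction
done:   sw   r2, (r5)        # store r2 to memory at the address in r5
```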
CP0 Registers

Exception management:
- c0_cause - the cause of the most recent exception
- c0_status - the current status of the CPU
- c0_epc - the address of the instruction that caused the exception (note the BD bit in c0_cause)
- c0_badvaddr - the address accessed that caused the exception

Miscellaneous:
- c0_prid - processor identifier

Memory management:
- c0_index, c0_random, c0_entryhi, c0_entrylo, c0_context - more about these later in the course

c0 registers
- For practical purposes, you can ignore most of these bits; the highlighted (green background) fields are the focus.
- CU0-3 - enable access to coprocessors (1 = enable)
  - CU0 is never enabled for user mode; CP0 is always accessible in kernel mode regardless of the setting
  - CU1 is the floating point unit (if present; there is no FPU in sys161)
  - CU2-3 are reserved

c0_status
- RE - reverse endian
- BEV - boot exception vectors (1 = use ROM exception vectors, 0 = use RAM exception vectors)
- TS - TLB shutdown (1 = duplicate entry; needs a hardware reset)
- PE - parity error in cache
- CM - cache management
- PZ - cache parity zero
- SwC - access instruction cache as data
- IsC - isolate data cache
- IM - individual interrupt mask bits (6 external, 2 software)
- KU - 0 = kernel mode, 1 = user mode
- IE - 0 = all interrupts masked, 1 = interrupts enabled (mask determined via the IM bits)
- c, p, o = current, previous, old copies of the KU/IE bits

c0_cause

[Figure 3.3: fields in the Cause register - BD, CE, IP, and ExcCode.]

- IP: interrupts pending - 8 bits indicating the current state of the interrupt lines
- CE: coprocessor error - an attempt to access a disabled coprocessor
- BD: if set, the instruction that caused the exception was in a branch delay slot
- ExcCode: the code number of the exception taken

Exception Codes

<table>
  <thead>
    <tr><th>ExcCode</th><th>Mnemonic</th><th>Description</th></tr>
  </thead>
  <tbody>
    <tr><td>0</td><td>Int</td><td>Interrupt</td></tr>
    <tr><td>1</td><td>Mod</td><td>TLB modified</td></tr>
    <tr><td>2 / 3</td><td>TLBL / TLBS</td><td>TLB load / TLB store</td></tr>
    <tr><td>4</td><td>AdEL</td><td>Address error on load, e.g., an attempt to read a word or half word at an unaligned address</td></tr>
    <tr><td>5</td><td>AdES</td><td>Address error on store</td></tr>
  </tbody>
</table>

Table 3.2: ExcCode values - different kinds of exceptions.
c0_epc
- The Exception Program Counter: the address at which to restart execution after handling the exception or interrupt.
- The BD bit in c0_cause is used on the rare occasions when one needs to identify the actual exception-causing instruction (when it sits in a branch delay slot).
- Example: assume sw r3, (r4) causes a page fault exception.

```assembly
        sw   r3, (r4)
        nop
        sw   r3, (r4)
```

c0_badvaddr
- The memory address whose access caused the exception.
- Set if the exception is:
  - MMU related
  - an access to kernel space from user mode
  - an unaligned memory access (4-byte words must be aligned on a 4-byte boundary)

Exception Vectors

<table>
  <thead>
    <tr><th>Program address</th><th>"Segment"</th><th>Physical address</th><th>Description</th></tr>
  </thead>
  <tbody>
    <tr><td>0x8000 0000</td><td>kseg0</td><td>0x0000 0000</td><td>TLB miss on kuseg reference only</td></tr>
    <tr><td>0x8000 0080</td><td>kseg0</td><td>0x0000 0080</td><td>All other exceptions</td></tr>
    <tr><td>0xbfc0 0100</td><td>kseg1</td><td>0x1fc0 0100</td><td>Uncached alternative to the TLB miss entry point, used if the BEV bit is set</td></tr>
    <tr><td>0xbfc0 0180</td><td>kseg1</td><td>0x1fc0 0180</td><td>Uncached alternative for all other exceptions, used if the BEV bit is set</td></tr>
    <tr><td>0xbfc0 0000</td><td>kseg1</td><td>0x1fc0 0000</td><td>The "reset exception"</td></tr>
  </tbody>
</table>

Table 4.1: Reset and exception entry points (vectors) for the R3000 family.

Hardware exception handling
- Let's now walk through an exception. Assume an interrupt occurred just as the previous instruction completed, and note that we are in user mode with interrupts enabled.
- Initially: PC = 0x12345678, EPC is undefined, and the KU/IE bits in c0_status indicate user mode with interrupts enabled.
- The instruction address at which to restart after the interrupt is transferred to EPC, so EPC = 0x12345678; the current KU/IE bits in c0_status are pushed into the "previous" slots, and the current bits are set to kernel mode with interrupts disabled.
- The CPU is now running in kernel mode at 0x80000080, with interrupts disabled. All the information required to find out what caused the exception, and to restart after exception handling, is in the coprocessor registers (c0_cause, c0_status, c0_epc, and c0_badvaddr).

Returning from an exception
- For now, let's ignore how the exception is actually handled and how the user-level registers are preserved; let's simply look at how we return from the exception.
- The code to return is:

```assembly
        lw   r27, saved_epc   /* load the contents of EPC, saved somewhere when the exception was taken */
        nop                   /* load delay slot */
        jr   r27              /* store the EPC back in the PC (jump to it) */
        rfe                   /* in the branch delay slot: restore from exception */
```

- After the jr/rfe pair executes, PC = 0x12345678 again and we are back in the same state we were in when the exception happened.

Function Stack Frames
- Each function call allocates a new stack frame for local variables, the return address, the previous frame pointer, etc.
- Example: assume f1() calls f2(), which calls f3().

Software Register Conventions
- Given 32 registers, which registers are used for:
  - local variables?
  - argument passing?
  - function call results?
  - the stack pointer?

Stack Frame
- MIPS calling convention for gcc
- Args 1-4 have space reserved for them on the stack

Example Code

```c
int sixargs(int a, int b, int c, int d, int e, int f)
{
        return a + b + c + d + e + f;
}

int main(void)
{
        int i;
        i = sixargs(1, 2, 3, 4, 5, 6);
        return 0;
}
```

System Calls Continued

User and Kernel Execution
- Simplistically, execution state consists of registers, processor mode, PC, and SP.
- User applications and the kernel have their own execution state.
- The system call mechanism safely transfers from user execution to kernel execution and back.

System Call Mechanism in Principle
- Processor mode
  - Switched from user mode to kernel mode; switched back when returning to user mode.
- SP
  - The user-level SP is saved and a kernel SP is initialised; the user-level SP is restored when returning to user mode.
- PC
  - The user-level PC is saved and the PC is set to the kernel entry point; the user-level PC is restored when returning to user level.
  - Kernel entry via the designated entry point must be strictly enforced.
- Registers
  - Set at user level to indicate the system call type and its arguments - a convention between applications and the kernel.
  - Some registers are preserved at user level or kernel level in order to restart user-level execution; this depends on the language calling convention, etc.
  - The result of the system call is placed in registers when returning to user level - another convention.

Why do we need system calls?
- Why not simply jump into the kernel via a function call? A plain function call does not switch the processor from user mode to kernel mode (and eventually back again), and it does not restrict the possible entry points to secure locations.

Steps in Making a System Call
- There are 11 steps in making the system call read(fd, buffer, nbytes).

MIPS System Calls
- System calls are invoked via a syscall instruction.
- The syscall instruction causes an exception and transfers control to the general exception handler - A convention (an agreement between the kernel and applications) is required as to how user-level software indicates - Which system call is required - Where its arguments are - Where the result should go OS/161 Systems Calls - OS/161 uses the following conventions - Arguments are passed and returned via the normal C function calling convention - Additionally - Reg v0 contains the system call number - On return, reg a3 contains - 0: if success, v0 contains successful result - not 0: if failure, v0 has the errno. - v0 stored in errno - -1 returned in v0 CAUTION - Seriously low-level code follows - This code is not for the faint hearted User-Level System Call Walk Through ```c int read(int filehandle, void *buffer, size_t size) ``` - Three arguments, one return value - Code fragment calling the read function ```assembly 400124: 02602021 move a0,a3 400128: 27a50010 addiu a1,sp,16 40012c: 0c1001a3 jal 40068c <read> 400130: 24060201 move s0,v0 400134: 00400016 blez s0,400194 <docat+0x94> ``` - Args are loaded, return value is tested The read() syscall function part 1 ```assembly 0040068c <read>: 40068c: 08100190 j 400640 <__syscall> ``` - Appropriate registers are preserved - Arguments (a0-a3), return address (ra), etc. - The syscall number (5) is loaded into v0 - Jump (not jump and link) to the common syscall routine The read() syscall function part 2 ```assembly 00400640 <__syscall>: 400640: 0000000c syscall ``` - Test success, if yes, branch to return from function The read() syscall function part 2 ```assembly 00400640 <__syscall>: 400644: 0e000005 beqz a3,40065c <__syscall+0x1c> ``` - If failure, store code in errno The read() syscall function part 2 ```assembly 00400640 <__syscall>: 400654: 2403ffff li v1,-1 ``` - Set read() result to -1 ```assembly 400658: 2402ffff li v0,-1 40065c: 03e00008 jr ra 400660: 00000000 nop 400664: 00000000 nop 400668: 00000000 nop 40066c: 3c011000 lui at,0x1000 400670: ac220000 sw v0,0(at) 400674: 400658: 2402ffff li v0,-1 40065c: 03e00008 jr ra 400660: 00000000 nop ``` - If failure, store code in errno The read() syscall function part 2 Return to location after where read() was called Summary • From the caller’s perspective, the read() system call behaves like a normal function call – It preserves the calling convention of the language • However, the actual function implements its own convention by agreement with the kernel – Our OS/161 example assumes the kernel preserves appropriate registers(s0-s8, sp, gp, ra). • Most languages have similar support libraries that interface with the operating system. System Calls - Kernel Side • Things left to do – Change to kernel stack – Preserve registers by saving to memory (the stack) – Leave saved registers somewhere accessible to • Read arguments • Store return values – Do the “read()” – Restore registers – Switch back to user stack – Return to application Note k0, k1 registers available for kernel use exception: move k1, sp /* Save previous stack pointer in k1 */ mfc0 k0, c0_status /* Get status register */ andi k0, k0, CST_Kup /* Check the we-were-in-user-mode bit */ beq k0, $0, 1f /* If clear, from kernel, already have stack */ nop /* delay slot */ /* Coming from user mode - load kernel stack into sp */ lw k0, curkstack /* get address of "curkstack" */ w sp, 0(k0) /* load */ 1: mfc0 k0, c0_cause /* Now, load the exception cause. 
System Calls - Kernel Side

• Things left to do
  – Change to the kernel stack
  – Preserve registers by saving them to memory (on the stack)
  – Leave the saved registers somewhere accessible, to
    • read arguments
    • store return values
  – Do the "read()"
  – Restore the registers
  – Switch back to the user stack
  – Return to the application
• Note: the k0 and k1 registers are available for kernel use

```assembly
exception:
    move k1, sp          /* Save previous stack pointer in k1 */
    mfc0 k0, c0_status   /* Get status register */
    andi k0, k0, CST_Kup /* Check the we-were-in-user-mode bit */
    beq  k0, $0, 1f      /* If clear, from kernel, already have stack */
    nop                  /* delay slot */
    /* Coming from user mode - load kernel stack into sp */
    lw   k0, curkstack   /* get address of "curkstack" */
    lw   sp, 0(k0)       /* load it */
1:
    mfc0 k0, c0_cause    /* Now, load the exception cause. */
    j    common_exception /* Skip to common code */
    nop                  /* delay slot for the load */

common_exception:
    /*
     * At this point:
     *    Interrupts are off. (The processor did this for us.)
     *    k0 contains the exception cause value.
     *    k1 contains the old stack pointer.
     *    sp points into the kernel stack.
     *    All other registers are untouched.
     */

    /*
     * Allocate stack space for 37 words to hold the trap frame,
     * plus four more words for a minimal argument block.
     */
    addi sp, sp, -164
```

• The next six stores (not shown) are a "hack" to avoid confusing GDB; you can ignore the details of why and how.
• All the registers are then saved on the kernel stack.
• A pointer to the base of the saved registers and state is created in the first argument register.
• We can now use the other registers (t0, t1) that we have preserved on the stack.
• By creating a pointer to this area of type struct trapframe *, we can access the user's saved registers as normal variables within 'C'.

```c
struct trapframe {
    u_int32_t tf_vaddr;   /* coprocessor 0 vaddr register  */
    u_int32_t tf_status;  /* coprocessor 0 status register */
    u_int32_t tf_cause;   /* coprocessor 0 cause register  */
    u_int32_t tf_lo;
    u_int32_t tf_hi;
    u_int32_t tf_ra;      /* Saved register 31 */
    u_int32_t tf_at;      /* Saved register 1 (AT) */
    u_int32_t tf_v0;      /* Saved register 2 (v0) */
    u_int32_t tf_v1;      /* etc. */
    u_int32_t tf_a0;
    u_int32_t tf_a1;
    u_int32_t tf_a2;
    u_int32_t tf_a3;
    u_int32_t tf_t0;
    u_int32_t tf_t1;
    u_int32_t tf_t2;
    u_int32_t tf_t3;
    u_int32_t tf_t4;
    u_int32_t tf_t5;
    u_int32_t tf_t6;
    u_int32_t tf_t7;
    u_int32_t tf_s0;
    u_int32_t tf_s1;
    u_int32_t tf_s2;
    u_int32_t tf_s3;
    u_int32_t tf_s4;
    u_int32_t tf_s5;
    u_int32_t tf_s6;
    u_int32_t tf_s7;
    u_int32_t tf_t8;
    u_int32_t tf_t9;
    u_int32_t tf_k0;      /* dummy (see exception.S comments) */
    u_int32_t tf_k1;      /* dummy */
    u_int32_t tf_gp;
    u_int32_t tf_sp;
    u_int32_t tf_s8;
    u_int32_t tf_epc;     /* coprocessor 0 epc register */
};
```
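Because the assembly above lays the saved registers out exactly as struct trapframe describes them, the rest of the kernel can treat the user's registers as ordinary structure fields. The fragment below is a hedged sketch, not OS/161's actual dispatcher: it only shows where the system call number and arguments from the earlier convention end up.

```c
/* Hedged sketch (not OS/161's real code): reading the user's registers
 * through the trapframe laid out by the exception handler above. */
static void inspect_syscall(struct trapframe *tf)
{
    u_int32_t callno = tf->tf_v0;   /* system call number, per the convention */
    u_int32_t arg0   = tf->tf_a0;   /* e.g. the file handle for read()        */
    u_int32_t arg1   = tf->tf_a1;   /* e.g. the user buffer address           */
    u_int32_t arg2   = tf->tf_a2;   /* e.g. the byte count                    */

    (void)callno; (void)arg0; (void)arg1; (void)arg2;  /* dispatch would go here */
}
```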
Now we arrive in the 'C' kernel:

```c
/*
 * General trap (exception) handling function for mips.
 * This is called by the assembly-language exception handler once
 * the trapframe has been set up.
 */
void
mips_trap(struct trapframe *tf)
{
    u_int32_t code, isutlb, iskern;
    int savespl;

    /* The trap frame is supposed to be 37 registers long. */
    assert(sizeof(struct trapframe) == (37*4));

    /* Save the value of curspl, which belongs to the old context. */
    savespl = curspl;

    /* Right now, interrupts should be off. */
    curspl = SPL_HIGH;
    ...
}
```

When the kernel has finished, the saved state is restored and we return to user mode:

```assembly
exception_return:
    /* 16(sp) no need to restore tf_vaddr */
    lw   t0, 20(sp)       /* load status register value into t0 */
    nop                   /* load delay slot */
    mtc0 t0, c0_status    /* store it back to coprocessor 0 */
    /* 24(sp) no need to restore tf_cause */

    /* restore special registers */
    lw   t1, 28(sp)
    lw   t0, 32(sp)
    mtlo t1
    mthi t0

    /* load the general registers */
    lw   ra, 36(sp)
    lw   AT, 40(sp)
    lw   v0, 44(sp)
    lw   v1, 48(sp)
    lw   a0, 52(sp)
    lw   a1, 56(sp)
    lw   a2, 60(sp)
    lw   a3, 64(sp)
    lw   t0, 72(sp)
    lw   t1, 76(sp)
    lw   t2, 80(sp)
    lw   t3, 84(sp)
    lw   t4, 88(sp)
    lw   t5, 92(sp)
    lw   t6, 96(sp)
    lw   t7, 100(sp)
    lw   t8, 104(sp)
    lw   t9, 108(sp)
    lw   s0, 112(sp)
    lw   s1, 116(sp)
    lw   s2, 120(sp)
    lw   s3, 124(sp)
    lw   s4, 128(sp)
    lw   s5, 132(sp)
    lw   s6, 136(sp)
    lw   s7, 140(sp)
    /* 140(sp) "saved" k0 was dummy garbage anyway */
    /* 144(sp) "saved" k1 was dummy garbage anyway */
    lw   gp, 148(sp)      /* restore gp */
    /* 152(sp) stack pointer - below */
    lw   s8, 156(sp)      /* restore s8 */
    lw   k0, 160(sp)      /* fetch exception return PC into k0 */
    lw   sp, 152(sp)      /* fetch saved sp (must be last) */

    /* done */
    jr   k0               /* jump back */
    rfe                   /* in delay slot */
    .end common_exception
```

• Note again that only k0 and k1 have been trashed

What happens next?

• The kernel deals with whatever caused the exception
  – Syscall
  – Interrupt
  – Page fault
• It potentially modifies the trapframe, etc.
  – E.g., store the return code in v0, zero in a3
• `mips_trap` eventually returns
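To close the loop, the sketch below shows, in hedged form, what "modifies the trapframe" typically means for a system call before mips_trap returns and the assembly above restores the user state. It is not OS/161's actual mips_syscall(); the helper name and the error flag are illustrative.

```c
/* Hedged sketch (not OS/161's actual mips_syscall()): filling in the
 * trapframe so the user-level stub sees the convention described earlier. */
static void finish_syscall(struct trapframe *tf, int err, u_int32_t result)
{
    if (err) {
        tf->tf_v0 = err;     /* error number for the user-level stub      */
        tf->tf_a3 = 1;       /* non-zero a3 signals failure               */
    } else {
        tf->tf_v0 = result;  /* successful result                         */
        tf->tf_a3 = 0;       /* zero a3 signals success                   */
    }
    tf->tf_epc += 4;         /* step past the syscall instruction so it   */
                             /* is not re-executed on return to user mode */
}
```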
{"Source-Url": "http://cgi.cse.unsw.edu.au/~cs3231/06s1/lectures/lect03x6.pdf", "len_cl100k_base": 6820, "olmocr-version": "0.1.53", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 40094, "total-output-tokens": 7652, "length": "2e12", "weborganizer": {"__label__adult": 0.00038361549377441406, "__label__art_design": 0.00028252601623535156, "__label__crime_law": 0.0002853870391845703, "__label__education_jobs": 0.00030922889709472656, "__label__entertainment": 6.908178329467773e-05, "__label__fashion_beauty": 0.0001398324966430664, "__label__finance_business": 0.00010859966278076172, "__label__food_dining": 0.000415802001953125, "__label__games": 0.0006971359252929688, "__label__hardware": 0.005863189697265625, "__label__health": 0.0003445148468017578, "__label__history": 0.0002275705337524414, "__label__home_hobbies": 0.00012242794036865234, "__label__industrial": 0.0006518363952636719, "__label__literature": 0.00018537044525146484, "__label__politics": 0.00020229816436767575, "__label__religion": 0.000507354736328125, "__label__science_tech": 0.023468017578125, "__label__social_life": 5.507469177246094e-05, "__label__software": 0.006870269775390625, "__label__software_dev": 0.9580078125, "__label__sports_fitness": 0.0003173351287841797, "__label__transportation": 0.0005102157592773438, "__label__travel": 0.00018310546875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 22291, 0.06947]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 22291, 0.33065]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 22291, 0.70571]], "google_gemma-3-12b-it_contains_pii": [[0, 1110, false], [1110, 3622, null], [3622, 5082, null], [5082, 6019, null], [6019, 7749, null], [7749, 9905, null], [9905, 10927, null], [10927, 12367, null], [12367, 12999, null], [12999, 13759, null], [13759, 15391, null], [15391, 16837, null], [16837, 18654, null], [18654, 20102, null], [20102, 22291, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1110, true], [1110, 3622, null], [3622, 5082, null], [5082, 6019, null], [6019, 7749, null], [7749, 9905, null], [9905, 10927, null], [10927, 12367, null], [12367, 12999, null], [12999, 13759, null], [13759, 15391, null], [15391, 16837, null], [16837, 18654, null], [18654, 20102, null], [20102, 22291, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 22291, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, true], [5000, 22291, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 22291, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 22291, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 22291, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 22291, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 22291, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 22291, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 22291, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, true], [5000, 22291, null]], "pdf_page_numbers": [[0, 1110, 1], [1110, 3622, 2], [3622, 5082, 3], [5082, 6019, 4], [6019, 7749, 5], [7749, 9905, 6], [9905, 10927, 7], [10927, 12367, 8], [12367, 12999, 9], [12999, 13759, 10], [13759, 15391, 11], [15391, 16837, 12], [16837, 18654, 
13], [18654, 20102, 14], [20102, 22291, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 22291, 0.09688]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
397744b46a955d24172d61fcd6bc5f2f20308ff9
Verifying Bit-vector Invertibility Conditions in Coq – Extended Abstract* Burak Ekici University of Innsbruck Innsbruck, Austria burak.ekici@uibk.ac.at Arjun Viswanathan University of Iowa Iowa City, USA arjun-viswanathan@uiowa.edu Yoni Zohar Stanford University Stanford, USA yoniz@cs.stanford.edu Clark Barrett Stanford University Stanford, USA barrett@cs.stanford.edu Cesare Tinelli University of Iowa Iowa City, USA cesare-tinelli@uiowa.edu This work is a part of an ongoing effort to prove the correctness of invertibility conditions for the theory of fixed-width bit-vectors, which are used to solve quantified bit-vector formulas in the Satisfiability Modulo Theories (SMT) solver CVC4. While many of these were proved in a completely automatic fashion for any bit-width, some were only proved for bit-widths up to 65, even though they are being used to solve formulas over arbitrary bit-widths. In this paper we describe our initial efforts in proving a subset of these invertibility conditions in the Coq proof assistant. We describe the Coq library that we use, as well as the extensions that we introduced to it. 1 Introduction Reasoning logically about bit-vectors is useful for many applications in hardware and software verification. While Satisfiability Modulo Theories (SMT) solvers are able to reason about bit-vectors of fixed width, they currently require all widths to be expressed concretely (by a numeral) in their input formulas. For this reason, they cannot be used to prove properties of bit-vector operators that are parametric in the bit-width such as, for instance, the associativity of bit-vector concatenation. Proof assistants such as Coq [13], that have direct support for dependent types are better suited for such tasks. Bit-vector formulas that are parametric in the bit-width arise in the verification of parametric Boolean functions and circuits (see, e.g., [8]). In our case, we are mainly interested in parametric lemmas that are relevant to internal techniques of SMT solvers for the theory of fixed-width bit-vectors. Such techniques are developed a priori for every possible bit-width, even though they are applied on a particular bit-width. Meta-reasoning about the correctness of such solvers then requires bit-width independent reasoning. An example of the latter kind, which is the focus of the current paper, is the notion of invertibility conditions [9] as a basis for a quantifier-instantiation technique to reason about the satisfiability of quantified bit-vector formulas. For a trivial case of an invertibility condition consider the equation $x + s = t$ where $x$, $s$ and $t$ are variables of the same bit-vector sort, and $+$ is bit-vector addition. In the terminology of Niemietz et al. [9], this equation is “invertible” for $x$, i.e., solvable for $x$, for any value of $s$ and $t$. A general solution is represented by the term $t - s$. Since the solution is unconditional, the invertibility condition for $x + s = t$ is simply the universally true formula $\top$. The formula stating this fact, referred to here as an invertibility equivalence, is $\top \iff \exists x. x + s = t$, a valid formula in the theory of fixed-width bit-vectors. --- *This work has been partially supported by the Austrian Science Fund (FWF) grant P26201, the European Research Council (ERC) Grant No. 714034 SMART, DARPA award N66001-18-C-4012, and ONR contract N68335-17-C-0558. Haniel Barbosa and Giselle Reis (Eds.): Sixth Workshop on Proof eXchange for Theorem Proving (PxTP) EPTCS 301, 2019, pp. 
18–26, doi:10.4204/EPTCS.301.4 © B. Ekici, A. Viswanathan, Y. Zohar, C. Barrett, and C. Tinelli This work is licensed under the Creative Commons Attribution License. bit-vectors for any bit-width $n$ for $x$, $s$ and $t$. In contrast, the equation $x \cdot s = t$ is not always invertible for $x$ (· stands for bit-vector multiplication). A necessary and sufficient condition for invertibility is $(-s | s) & t = t$ meaning that the invertibility equivalence $(-s | s) & t = t \iff \exists x. x \cdot s = t$ is valid for any bit-width $n$ for $x$, $s$ and $t$ [9]. Notice that this invertibility condition involves the operations &, | and −, and not · that occurs in the literal itself. Niemetz et al. [9] provide a total of 160 invertibility conditions covering several bit-vector operators for both equations and inequations. However, they were able to verify, using SMT solvers, the corresponding invertibility equivalences only for concrete bit-widths up to 65, given the reasoning limitations of SMT solvers mentioned earlier. A recent paper by Niemetz et al. [10] addresses this challenge by translating these invertibility equivalences into quantified formulas over the combined theory of non-linear integer arithmetic and uninterpreted functions — a theory supported by a number of SMT solvers. While partially successful, this approach failed to verify over a quarter of the invertibility equivalences. In this work, we approach the task of verifying the invertibility equivalences proposed in [9] by proving them interactively with the Coq proof assistant. We extend a rich Coq library for bit-vectors we developed in previous work [6] with additional operators and lemmas to facilitate the task of verifying invertibility equivalences for arbitrary bit-widths, and prove a representative subset of them. Our results offer evidence that proof assistants can support automated theorem provers in meta-verification tasks. Our Coq library models the theory of fixed-width bit-vectors adopted by the SMT-LIB 2 standard [1]. It represents bit-vectors as lists of Booleans. The bit-vector type is dependent on a positive integer that represents the length of the list. Underneath the dependent representation is a simply-typed or raw bit-vector type with a size function which is used to explicitly state facts on the length of the list. A functor translates an instance of a raw bit-vector along with specific information about its size into a dependently-typed bit-vector. For this work, we extended the library with the arithmetic right shift operation and the unsigned weak less-than and greater-than predicates and proved 18 invertibility equivalences. We initially proved these equivalences over raw bit-vectors and then used these proofs when proving the invertibility equivalences over dependent bit-vectors, as we explain in Section 4. The remainder of this paper is organized as follows. After some technical preliminaries in Section 2, we provide an overview of invertibility conditions for the theory of fixed-width bit-vectors in Section 3 and discuss previous attempts to verify them. Then, in Section 4, we describe the bit-vector Coq library and our current extensions to it. In Section 5, we outline how we used the extended library to prove the correctness of a representative subset of invertibility equivalences. We conclude in Section 6 with directions for future work. 2 Preliminaries We assume the usual terminology of many-sorted first-order logic with equality (see, e.g., [7] for more details). 
We denote equality by $=$, and use $x \neq y$ as an abbreviation for $\neg(x = y)$. The signature $\Sigma_{BV}$ of the SMT-LIB 2 theory of fixed-width bit-vectors includes a unique sort for each positive integer $n$, which we denote here by $\sigma_{[n]}$. For every positive integer $n$ and every bit-vector of width $n$, the signature includes a constant of sort $\sigma_{[n]}$ in $\Sigma_{BV}$ representing that bit-vector, which we denote as a binary string of length $n$. The function and predicate symbols of $\Sigma_{BV}$ are as described in the SMT-LIB 2 standard.¹ Formulas of $\Sigma_{BV}$ are built from variables (sorted by the sorts $\sigma_{[n]}$), bit-vector constants, and the function and predicate symbols of $\Sigma_{BV}$, along with the usual logical connectives and quantifiers. We write $\psi[x_1, \ldots, x_n]$ to represent a formula whose free variables are from the set $\{x_1, \ldots, x_n\}$.

¹ The SMT-LIB 2 theory is defined at http://www.smt-lib.org/theories.shtml.

The semantics of $\Sigma_{BV}$-formulas is given by interpretations that extend a single many-sorted first-order structure so that the domain of every sort $\sigma_{[n]}$ is the set of bit-vectors of bit-width $n$, and the function and predicate symbols are interpreted as specified by the SMT-LIB 2 standard. A $\Sigma_{BV}$-formula is valid in the theory of fixed-width bit-vectors if it evaluates to true in every such interpretation.

In what follows, we denote by $\Sigma_0$ the sub-signature of $\Sigma_{BV}$ containing the predicate symbols $<_u$, $>_u$, $\leq_u$, $\geq_u$ (corresponding to strong and weak unsigned comparisons between bit-vectors, respectively), as well as the function symbols $+$ (bit-vector addition), $\&$, $\mid$, $\sim$ (bit-wise conjunction, disjunction and negation), $-$ (2's complement unary negation), and $\ll$, $\gg$ and $\gg_a$ (left shift, and logical and arithmetical right shifts). We also denote by $\Sigma_1$ the extension of $\Sigma_0$ with the predicate symbols $<_s$, $>_s$, $\leq_s$, and $\geq_s$ (corresponding to strong and weak signed comparisons between bit-vectors, respectively), as well as the function symbols $-$, $\cdot$, $\div$, and $\bmod$ (corresponding to subtraction, multiplication, division and remainder), and $\circ$ (concatenation). We use 0 to represent the bit-vectors composed of all 0-bits. Its numerical or bit-vector interpretation should be clear from context. Using bit-wise negation $\sim$, we can express the bit-vectors composed of all 1-bits by $\sim 0$.

## 3 Invertibility Conditions and Their Verification

Many applications rely on bit-precise reasoning and thus can be modeled using the SMT-LIB 2 theory of fixed-width bit-vectors. For certain applications, such as verification of safety properties for programs, quantifier-free reasoning is not enough, and the combination of bit-precise reasoning with the ability to handle quantifiers is needed. Niemetz et al. present a technique to solve quantified bit-vector formulas, which is based on invertibility conditions [9]. An invertibility condition for a variable $x$ in a $\Sigma_{BV}$-literal $\ell[x, s, t]$ is a formula $IC[s, t]$ such that $\forall s. \forall t.\ IC[s, t] \iff \exists x. \ell[x, s, t]$ is valid in the theory of fixed-width bit-vectors. For example, consider the bit-vector literal $x \& s = t$ where $x$, $s$ and $t$ are distinct variables of the same sort. The invertibility condition for $x$ given in [9] is $t \& s = t$. Niemetz et al.
[9] define invertibility conditions for a representative set of literals $\ell$ having a single occurrence of $x$, that involve the bit-vector operators of $\Sigma_1$. The soundness of the technique proposed in that work relies on the correctness of the invertibility conditions. Every literal $\ell[x, s, t]$ and its corresponding invertibility condition $IC[s, t]$ induce the invertibility equivalence $$IC[s, t] \iff \exists x. \ell[x, s, t] \tag{1}$$ The correctness of invertibility equivalences should be verified for all possible sorts for the variables $x, s, t$ for which the condition is well sorted. More concretely, for the case where $x, s, t$ are all of sort $\sigma_{[n]}$, say, this means that one needs to prove, for all $n > 0$, the validity of $$\forall s : \sigma_{[n]}, \forall t : \sigma_{[n]}, IC[s, t] \iff \exists x : \sigma_{[n]}, \ell[x, s, t] \, .$$ This was done in Niemetz et al. [9] using an SMT solver but only for concrete values of $n$ from 1 to 65. A proof of Equation (1) that is parametric in the bit-width $n$ cannot be done with SMT solvers, since they currently only support the theory of fixed-width bit-vectors, where Equation (1) cannot even be expressed. To overcome this limitation, a later paper by Niemetz et al. [10] suggested a translation from bit-vector formulas with parametric bit-widths to the theory of (non-linear) integer arithmetic with uninterpreted functions. Thanks to this translation, the authors were able to verify, with the aid of SMT solvers for the theory of integer arithmetic with uninterpreted functions, the correctness of 110 out of 160 invertibility equivalences. None of the solvers used in that work were able to prove the remaining equivalences. For those, it then seems appropriate to use a proof-assistant, as this allows for more intervention by the user who can provide crucial intermediate steps. It goes without saying that even for the 110 invertibility equivalences that were proved, the level of confidence achieved by proving them in a proof-assistant such as Coq would be greater than a verification (without a verified formal proof) by an SMT solver. In the rest of this paper we describe our initial efforts and future plans for proving the invertibility equivalences, starting with those that were not proved in [10]. 4 The Coq Bit-vector Library In this section, we describe the Coq library we use and the extensions we developed with the goal of formalizing and proving invertibility equivalences. The original library was developed for SMTCoq [6], a Coq plugin that enables Coq to dispatch proofs to external proof-producing solvers. It is used to represent SMT-LIB 2 bit-vectors in Coq. Coq’s own library of bit-vectors [5] was an alternative, but it has only definitions and no lemmas. A more suitable substitute could have been the Bedrock Bit Vectors Library [3] or the SSRBit Library [2]. We chose the SMTCoq library mainly because it was explicitly developed to represent SMT-LIB 2 bit-vectors in Coq and comes with a rich set of lemmas relevant to proving the invertibility equivalences. The SMTCoq library contains both a simply-typed and dependently-typed theory of bit-vectors implemented as module types. The former, which we also refer to as a theory of raw bit-vectors, formalizes bit-vectors as Boolean lists while the latter defines a bit-vector as a Coq record, with its size as the parameter, made of two fields: a Boolean list and a coherence condition to ensure that the parameterized size is indeed the length of the given list. 
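The following is a minimal, self-contained sketch of that two-layer setup, with illustrative names rather than the SMTCoq library's actual definitions: a raw bit-vector is just a Boolean list, and the dependently-typed version packages the list with a proof that its length matches the declared size.

```coq
(* Illustrative sketch only; the names differ from the SMTCoq library. *)
Require Import NArith List.

Definition raw_bitvector := list bool.

Record bitvector_dep (n : N) : Type := MkBitvector {
  bv_bits : raw_bitvector;                  (* the underlying Boolean list *)
  bv_wf   : N.of_nat (length bv_bits) = n   (* coherence: its length is n  *)
}.
```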
The library also implements a functor module from the simply-typed module to the dependently-typed module, establishing a correspondence between the two theories. This way, one can first prove a bit-vector property in the context of the simply-typed theory and then map it to its corresponding dependently-typed one via the functor module. Note that while it is possible to define bit-vectors natively as a dependently-typed theory in Coq and prove their properties there, it would be cumbersome and unduly complex to do dependent pattern matching or case analysis over bit-vector instances because of the complications brought by unification in Coq (which is inherently undecidable). One can try to handle such complications as illustrated by Sozeau [12]. However, we found the two-theory approach of Ekici et al. [6] more convenient in practice for our purposes.

The library adopts the little-endian notation for bit-vectors, thus following the internal representation of bit-vectors in SMT solvers such as CVC4. This makes arithmetic operations easier to perform, since the least significant bit of a bit-vector is the head of the list representing it in the raw theory.

Out of the 11 bit-vector operators and 10 predicates contained in $\Sigma_1$, the library had support for 8 operators and 6 predicates. The supported predicates, however, can be used to express the other 4. The predicate and function symbols that were not directly supported by the library were the weak inequalities $\leq_u$, $\geq_u$, $\leq_s$, $\geq_s$ and the operators $\gg_a$, $\div$, and mod. We extended the library with the operator $\gg_a$ and the predicates $\leq_u$ and $\geq_u$, and redefined $\ll$ and $\gg$, as explained in Section 5.

We focused on invertibility conditions for literals of the form $x \diamond s \rhd t$ and $s \diamond x \rhd t$, where $x$, $s$, and $t$ are variables and $\diamond$ and $\rhd$ are respectively function and predicate symbols in $\Sigma_0 \cup \{=, \neq\}$ (invertibility conditions for such literals were found in [9] for the extended signature $\Sigma_1$). $\Sigma_0$ was chosen as a representative set because it seemed both expressive enough and feasible for proofs in Coq. Such literals, as well as their invertibility conditions, include only operators that are supported by the library (after its extension with $\gg_a$, $\leq_u$, and $\geq_u$).

To demonstrate the intuition and various aspects of the extension of the library, we briefly describe the addition of $\leq_u$ (the definition of $\geq_u$ is similar). The relevant Coq definitions are provided in Figure 1.²

```coq
Fixpoint ule_list_big_endian (x y : list bool) :=
  match x, y with
  | nil, nil => true
  | nil, _ => false
  | _, nil => false
  | xi :: x', yi :: y' =>
      ((eqb xi yi) && (ule_list_big_endian x' y')) || ((negb xi) && yi)
  end.

Definition ule_list (x y : list bool) :=
  ule_list_big_endian (rev x) (rev y).

Definition bv_ule (a b : bitvector) :=
  if @size a =? @size b then ule_list a b else false.
```

Figure 1: Definitions of $\leq_u$ in Coq.
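As a quick sanity check of the big-endian comparison, assuming the definitions from Figure 1 are in scope (the example values are ours): reading the most significant bit first, 01 is unsigned-less-or-equal to 10.

```coq
(* Example values are illustrative; requires the Figure 1 definitions above. *)
Compute ule_list_big_endian (false :: true :: nil) (true :: false :: nil).
(* = true : bool *)
```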
Like most other operators, $\leq_u$ is defined in several layers. The function bv_ule, at the highest layer, ensures that comparisons are between bit-vectors of the same size and then calls ule_list. Since we want to compare bit-vectors starting from their most significant bits and the input lists start instead with the least significant bits (because of the little-endian encoding), ule_list first reverses the two lists. Then it calls ule_list_big_endian, which we consider to be at the lowest layer of the definition. ule_list_big_endian then does a lexicographical comparison of the two lists, starting from the most significant bits.

To see why the addition of $\leq_u$ to the library is useful, consider, for example, the following parametric lemma, stating that $\sim 0$ is the largest unsigned bit-vector of its type:

$$\forall x : \sigma_{[n]}.\ x \leq_u \sim 0 \tag{2}$$

When not using this explicit operator, we usually rewrite it as:

$$\forall x : \sigma_{[n]}.\ x <_u \sim 0 \lor x = \sim 0 \tag{3}$$

In such cases, since the definitions of $<_u$ and $=$ have a similar structure to the one in Figure 1, we strip down the layers of $<_u$ and $=$ separately, whereas using $\leq_u$, we only do this once. Depending on the specific proof at hand, using $\leq_u$ is sometimes more convenient for this reason.

5 Proving Invertibility Equivalences in Coq

In this section we provide specific details about proving invertibility equivalences in Coq. In addition to the bit-vector library described in Section 4, in several proofs of invertibility equivalences we benefited from CoqHammer [4], a plug-in that aims at extending the automation in Coq by combining machine learning and automated reasoning techniques in a similar fashion to what is done in Isabelle/HOL [11]. Note that one does not need to install CoqHammer in order to build the bit-vector library, since all the proof reconstruction tactics of CoqHammer are included in it.

² Both the library and the proofs of invertibility equivalences can be found at https://github.com/ekiciburak/bitvector/tree/pxt2019. It compiles with coqc-8.9.0.

The natural representation of bit-vectors in Coq is the dependently-typed representation, and therefore the invertibility equivalences are formulated using this representation. As discussed in Section 4, however, proofs in this representation are composed of proofs over simply-typed bit-vectors, which are easier to reason about. Some conversions between the different representations are then needed to lift a proof over raw bit-vectors to one over dependently-typed bit-vectors. For example, Figure 2 includes a proof of the following direction of the invertibility equivalence for $\gg_a$ and $<_u$:

$$\forall s : \sigma_{[n]}.\ \forall t : \sigma_{[n]}.\ (\exists x : \sigma_{[n]}.\ s \gg_a x <_u t) \Rightarrow ((s <_u t \lor \neg(s <_s 0)) \land t \neq 0) \tag{4}$$

```coq
Theorem bvashr_ult2_rtl : forall (n : N), forall (s t : bitvector n),
  (exists (x : bitvector n), (bv_ult (bv_ashr_a s x) t = true)) ->
  ((bv_ult s t = true \/ bv_slt s (zeros n) = false) /\
   bv_eq t (zeros n) = false).
Proof. intros n s t H.
       destruct H as ((x, Hx), H).
       destruct s as (s, Hs).
       destruct t as (t, Ht).
       unfold bv_ult, bv_slt, bv_ashr_a, bv_eq, bv in *. cbn in *.
       specialize (InvCond.bvashr_ult2_rtl n s t Hs Ht); intro STIC.
       rewrite Hs, Ht in STIC.
       apply STIC. now exists x.
Qed.
```

Figure 2: A proof of one direction of the invertibility equivalence for $\gg_a$ and $<_u$ using dependent types.

In the proof, lines 6–9 transform the dependent bit-vectors from the goal and the hypotheses into simply-typed bit-vectors. Then, lines 10–12 invoke the corresponding lemma for simply-typed bit-vectors (called InvCond.bvashr_ult2_rtl n s t Hs Ht) along with some simplifications. Most of the effort in this project went into proving equivalences over raw bit-vectors.
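The flavour of those raw-level proofs can be shown with a small, self-contained example in the same style (our own toy operation and size function, not a lemma from the library): facts about simply-typed bit-vectors carry explicit size hypotheses, which the dependently-typed layer later discharges via the record's coherence proof.

```coq
(* Toy example in the raw, simply-typed style; not taken from the library. *)
Require Import NArith List.

Definition raw_bv := list bool.
Definition size (x : raw_bv) : N := N.of_nat (length x).

Definition bv_not_raw (x : raw_bv) : raw_bv := map negb x.

(* Raw-level fact with an explicit size hypothesis. *)
Lemma bv_not_raw_size : forall (n : N) (x : raw_bv),
  size x = n -> size (bv_not_raw x) = n.
Proof.
  intros n x H. unfold size, bv_not_raw in *.
  rewrite map_length. exact H.
Qed.
```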
As an illustration, consider the following equivalence over $\ll$ and $>_u$:

$$\forall s : \sigma_{[n]}.\ \forall t : \sigma_{[n]}.\ (t <_u \sim 0 \ll s) \iff (\exists x : \sigma_{[n]}.\ x \ll s >_u t) \tag{5}$$

The left-to-right implication is easy to prove using $\sim 0$ itself as the witness of the existential proof goal and considering the symmetry between $>_u$ and $<_u$. The proof of the right-to-left implication relies on the following lemma:

$$\forall x : \sigma_{[n]}.\ \forall s : \sigma_{[n]}.\ (x \ll s) \leq_u (\sim 0 \ll s) \tag{6}$$

From the right side of the equivalence in Equation (5), we get some $x$ for which $x \ll s >_u t$ holds. Flipping the inequality, we have that $t <_u x \ll s$; using this, and transitivity over $<_u$ and $\leq_u$, Lemma 6 gives us the left side of the equivalence in Equation (5).

As mentioned in Section 4, we have redefined the shift operators $\ll$ and $\gg$ in the library. This was instrumental, for example, in the proof of Equation (6). Figure 3 includes both the original and new definitions of $\ll$. The definitions of $\gg$ are similar. Originally, $\ll$ was defined using the shl_one_bit and shl_n_bits functions. shl_one_bit shifts the bit-vector to the left by one bit and is repeatedly called by shl_n_bits to complete the shift. The new definition shl_n_bits_a uses mk_list_false, which constructs the necessary list of 0s and appends it (++ in Coq) to the beginning of the list (because of the little-endian encoding); the bits to be shifted from the original bit-vector are retrieved using the firstn function, which is defined in the Coq library for lists. The nat type used in Figure 3 is the Coq representation of Peano natural numbers that has 0 and S as its two constructors, as depicted in the pattern match in lines 9 and 10. The theorem at the bottom of Figure 3 allows us to switch between the two definitions when needed. Function bv_shl defines the left shift operation using shl_n_bits, whereas bv_shl_a does it using shl_n_bits_a.

The new definition uses firstn and ++, over which many necessary properties are already proven in the standard library. This benefits us in manual proofs, and in calls to CoqHammer, since the latter is able to use lemmas from the imported libraries to prove the goals that are given to it. Using this representation, proving Equation (6) reduces to proving Lemmas bv_ule_1_firstn and bv_ule_pre_append, shown in Figure 4. The proof of bv_ule_pre_append benefited from the property app_comm_cons from the standard list library of Coq, while firstn_length_le was useful in reducing the goal of bv_ule_1_firstn to Coq's equivalent of Equation (2). The statements of the properties mentioned from the standard library are also shown in Figure 4. mk_list_true creates a bit-vector that represents $\sim 0$, of the length given to it as input, and bv_ule is the representation of $\leq_u$ in the bit-vector library. bv_ule has output type bool (and so we equate terms in which it occurs to true), while the functions from the standard library have output type Prop. We also have two definitions for $\gg_a$, and a proof of their equivalence (as done for the other shift operators).

Table 1 summarizes the results of proving invertibility equivalences for invertibility conditions in the signature $\Sigma_0$. In the table, ✓ means that the invertibility equivalence was successfully verified in Coq but not in [10], while ✓ means the opposite; ✓ means that the invertibility equivalence was verified using both approaches, and ✗ means that it was verified with neither.
```coq
Lemma bv_ule_1_firstn : forall (n : nat) (x : bitvector),
  (n < length x)%nat ->
  bv_ule (firstn n x) (firstn n (mk_list_true (length x))) = true.

Lemma bv_ule_pre_append : forall (x y z : bitvector),
  bv_ule x y = true -> bv_ule (z ++ x) (z ++ y) = true.

Theorem app_comm_cons : forall (x y : list A) (a : A),
  a :: (x ++ y) = (a :: x) ++ y.

Lemma firstn_length_le : forall (l : list A) (n : nat),
  n <= length l -> length (firstn n l) = n.
```

Figure 4: Examples of lemmas used in proofs of invertibility equivalences.

<table>
<thead>
<tr><th>$\ell[x]$</th><th>$=$</th><th>$<_u$</th><th>$>_u$</th><th>$\leq_u$</th><th>$\geq_u$</th></tr>
</thead>
<tbody>
<tr><td>$-x \otimes t$</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr>
<tr><td>$\sim x \otimes t$</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr>
<tr><td>$x \& s \otimes t$</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr>
<tr><td>$x \mid s \otimes t$</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr>
<tr><td>$x \ll s \otimes t$</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr>
<tr><td>$s \ll x \otimes t$</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr>
<tr><td>$x \gg s \otimes t$</td><td>✓</td><td>✓</td><td>✗</td><td>✓</td><td>✓</td></tr>
<tr><td>$s \gg x \otimes t$</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr>
<tr><td>$x \gg_a s \otimes t$</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr>
<tr><td>$s \gg_a x \otimes t$</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr>
<tr><td>$x + s \otimes t$</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr>
</tbody>
</table>

Table 1: Proved invertibility equivalences in $\Sigma_0$, where $\otimes$ ranges over the given predicate symbols.

We successfully proved all invertibility equivalences over $=$ that are expressible in $\Sigma_0$, including 4 that were not proved in [10]. For the rest of the predicates, we focused only on the 8 invertibility equivalences that were not proved in [10], and succeeded in proving 7 of them. Overall, these results strictly improve the results of [10], as we were able to prove 11 additional invertibility equivalences in Coq. Taking into account our work together with [10], only one invertibility equivalence for the restricted signature is not fully proved yet, the one for the literal $x \gg s >_u t$, although one direction of the equivalence, namely $IC[s, t] \Rightarrow \exists x. \ell[x, s, t]$, was successfully proved both in Coq and in [10].

6 Conclusion and Future Work

We have described our work-in-progress on verifying bit-vector invertibility conditions in the Coq proof assistant, which required extending a bit-vector library in Coq. The most immediate direction for future work is proving more of the invertibility equivalences supported by the bit-vector library. In addition, we plan to extend the library so that it supports the full syntax in which invertibility conditions are expressed, namely $\Sigma_1$. We expect this to be useful also for verifying properties about bit-vectors in other applications.

References
{"Source-Url": "http://export.arxiv.org/pdf/1908.09478", "len_cl100k_base": 7261, "olmocr-version": "0.1.53", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 31300, "total-output-tokens": 8714, "length": "2e12", "weborganizer": {"__label__adult": 0.0004901885986328125, "__label__art_design": 0.0004892349243164062, "__label__crime_law": 0.0007338523864746094, "__label__education_jobs": 0.0010423660278320312, "__label__entertainment": 0.00014865398406982422, "__label__fashion_beauty": 0.00024580955505371094, "__label__finance_business": 0.000461578369140625, "__label__food_dining": 0.0006623268127441406, "__label__games": 0.0011911392211914062, "__label__hardware": 0.0015869140625, "__label__health": 0.0013551712036132812, "__label__history": 0.0004727840423583984, "__label__home_hobbies": 0.0001811981201171875, "__label__industrial": 0.0013275146484375, "__label__literature": 0.00042629241943359375, "__label__politics": 0.0006361007690429688, "__label__religion": 0.0008234977722167969, "__label__science_tech": 0.453125, "__label__social_life": 0.00015223026275634766, "__label__software": 0.00922393798828125, "__label__software_dev": 0.5234375, "__label__sports_fitness": 0.0004315376281738281, "__label__transportation": 0.0010709762573242188, "__label__travel": 0.0002639293670654297}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30868, 0.01578]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30868, 0.26362]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30868, 0.87197]], "google_gemma-3-12b-it_contains_pii": [[0, 3756, false], [3756, 8111, null], [8111, 12339, null], [12339, 16638, null], [16638, 19048, null], [19048, 22281, null], [22281, 24732, null], [24732, 27574, null], [27574, 30868, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3756, true], [3756, 8111, null], [8111, 12339, null], [12339, 16638, null], [16638, 19048, null], [19048, 22281, null], [22281, 24732, null], [24732, 27574, null], [27574, 30868, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30868, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30868, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30868, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30868, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30868, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30868, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30868, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30868, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30868, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30868, null]], "pdf_page_numbers": [[0, 3756, 1], [3756, 8111, 2], [8111, 12339, 3], [12339, 16638, 4], [16638, 19048, 5], [19048, 22281, 6], [22281, 24732, 7], [24732, 27574, 8], [27574, 30868, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30868, 0.07602]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
47e401d1e89ea7da11b5109e82296e68dff4583e
The Micro-service Architecture Design Research of Financial Trading System based on Domain Engineering Xing Sheng\textsuperscript{a}, Shuangshuang Hu\textsuperscript{b}, Yihui Lu\textsuperscript{c} CFETS Information Technology (Shanghai) Co., Ltd, China \textsuperscript{a}shengxing\_zh@chinamoney.com.cn, \textsuperscript{b}hushuangshuang\_zh@chinamoney.com.cn, \textsuperscript{c}luyihui\_zh@chinamoney.com.cn Abstract. The purpose of this article is to propose a domain engineering-based micro-service reference architecture for financial trading platform, which can solve the problems of high complexity and high maintenance cost of a real large-scale financial trading system and try to provide a general solution for similar scenarios in the future. Starting from the concept of domain engineering and micro-service and the relationship between them, this article briefly introduces the background and pain points of the actual large-scale financial transaction system, emphatically expounds the principles and methods of domain analysis and micro-service disassembly for the legacy system, and puts forward a reference architecture with future reuse significance. The corresponding suggestions are also given for how the new micro-services will coexist with the existing legacy systems for a long time. In order to promote the landing of domain engineering and micro-service, besides theoretical research and necessary practice, enterprises also need to change the traditional management mode and introduce DevOps and other mechanisms, so that business processes and technologies can support the rapid development of business. Keywords: Domain Engineering, Micro-service, Legacy System, Reference Architecture. 1. Introduction With the development of information technology and the complication of enterprise business, the application of the system is becoming more and more complex, the code size is getting larger and larger, the maintenance and the costs of updating and difficulties are continuously improved. The development of technology and the expansion of business have brought new challenges to the deployment of the system and put forward new requirements for the efficiency and expansion of the enterprise application architecture [1]. Componentization [2] is an important tool for solving complexity problems. It divides a huge application into multiple components, each of which performs independent functions and works together. The componentized implementation of the original system is using the library, which can separate the development, compilation and testing, but cannot be completely divided according to the business function, the code of each business function is mixed together, the difficulty of debugging and deployment is larger. When a library or component is modified, you need to restart the application. With the emergence of micro-service architecture [3-6] and domain engineering [7-10] and its successful application at home and abroad, it provides new ideas and implementation solutions for the transformation of a single old system. In this paper, we will analyze the application system with domain engineering modeling, design the framework model with the common domain requirements, and provide theoretical basis for component division. Componentization through micro-services, to split applications into a single service, and reduce coupling between components. 2. 
Overview of Domain Engineering and Micro-service 2.1 Domain Engineering Domain is the area covered by a set of application systems with similar requirements and functions. Domain Engineering refers to the activity of collecting, organizing and preserving experience in a reusable form when constructing a new system or some parts of a system in a specific domain, and providing an adequate way to reuse these resources. It analyses lots of systems in the same field and... draws their common domain requirements, to design a framework model that meets the requirements. Finally, it develops and organizes reusable components based on the above framework model. In this way, when developing new applications in the same field, the requirements of new applications can be determined according to the domain model and the specific domain. Domain software architecture can be used to generate the design of new applications, on which basis, reusable components are selected for assembly, thus forming a new system. From the above definition, we can conclude that domain engineering can be roughly divided into three steps, domain analysis, domain design and domain implementation. Domain analysis is concerned with how to design a set of accurate, concise and correct real-world models to understand requirements in depth by creating models. The key point is to abstract the core features and put details into specific design processes. The model is the selective simplification and purposeful structuring of knowledge. Domain model is not only the knowledge in the domain expert's mind, but also the abstract knowledge which is strictly organized and carefully selected. In the field of software development and design by domain engineering method, the role of domain models cannot be ignored. First, the core of the model and design interacts. The close relationship between the model and the implementation ensures that the analysis we make in the model can be transformed into the final product. Secondly, model is the center of communication languages used by team members. Domain model can be used to promote the communication between developers and domain experts. In addition, the model is the concentrated knowledge, and the model is the way to organize domain knowledge and to distinguish the most important elements. Many complex projects are actually trying to use domain models, which are worthless if the entire programming or core parts do not correspond to the domain model. So, we must strictly guarantee the consistency between model and design within a certain range. Object oriented method provides modeling support for this model, and also provides a way to implement the model constructions. Domain design is the second stage of domain engineering, of which the main goal is to develop a corresponding design model for the problem domain and to express it explicitly. In the process of preliminary domain design, four parts of OOD [11] model should be designed, namely problem domain, human-computer interaction, control interfaces and data interfaces. The designs of these four parts have no certain order in time, and they can be interpolated according to the actual situation. The information sources of this activity mainly include domain requirements definitions, object-oriented analysis models and design of existing systems in this domain. Systems in the same field are often similar in terms of implementation. 
The task of preliminary domain design is to establish a basic OOD model and grasp the commonalities of these implementations. Since existing systems are developed in their own specific environments, there may be two or more different solutions to the same problem, in which situation if there is a solution that can adapt to the variability of the systems in the field, and there are no obvious side effects (such as reduced efficiency or impact on other parts of the design) when applied to various situations, then this solution should be adopted. Otherwise, multiple solutions should be chosen to make them suitable separately. The main activity of domain implementation is to use appropriate technology and language to implement product architecture and components in domain design. Domain Specific Language (DSL) [12] is a specific problem-oriented language. After implementing domain architecture and components, DSL can be used to generate specific software products. Which can be specific source programs or compiled software modules. In the fourth part of this paper, we will elaborate on the whole process from domain analysis to domain realization based on specific examples. ### 2.2 Micro-service The so-called micro-service is to divide the functions in the application as fine as possible and treat each small function as a separate service. These tiny services usually use HTTP API to communicate, and they focus on specific business functions, have strong module boundaries, and can be deployed independently. When we talk about micro-service, people always compare it with SOA (Service Oriented Architecture). Strictly speaking, we can consider micro-service as a subset of SOA. In the late 1990s, SOA [13] first proposed the idea of using low-coupling and service-oriented processes in software architecture design. However, in the architecture of SOA, the complex ESB enterprise service bus [14] is still in a very important position. The architecture of the whole system has not been fully componentized and service-oriented, and its learning and using threshold is still high. The idea of micro-service architecture originates from the horizontal and vertical cutting and splitting of business functions and modules in project design. In 2012, the structure of micro-service was proposed, and since then many design cases have come into being. In the following years, Amazon, Uber and other enterprises carried out their own practices and all of them achieved successes. Micro-service emphasizes completely component-based and service-oriented. All micro-services are independent, and they are exposed to callers in the form of RESTAPI externally. Micro-service is essentially a design style of software architecture. The whole software service architecture is composed of multiple micro-services. It does not have certain rules and needs to be designed according to business requirements. The characteristics of micro-service architecture can be roughly summarized as following three points. Complexity controllable: Micro-service can decompose applications into manageable branches or services. Through the micro-service architecture model, complex functions can be presented in a modular way, which makes it easier to develop and maintain a single service. Flexible and scalable: The micro-service architecture enables each service to expand independently, and each service can add or subtract functions independently, making the whole system very flexible. 
Independent deployment: Micro-services have independent running processes, so each micro-service can also be deployed independently. Using the same deployment environment enables batch rapid deployment of micro-services. Compared with the traditional single application architecture, micro-service has obvious advantages in many aspects. 2.2.1 Heterogeneity Problems are often concrete, and the solutions should be targeted. The heterogeneity of micro-service can help developers select different technical solutions according to different business characteristics and solve specific business problems pertinently. As for the heterogeneity of micro-service, we will elaborate in detail in Chapter 4 how we introduce micro-service into legacy systems to make new micro-services coexist with existing systems for a long time. 2.2.2 Independent Test and Deployment Under the micro-service architecture, packaging, testing and deployment of different services are completely independent. From this point of view, the cost and risk of code modification, testing, packaging and deployment are much lower than that of single application architecture. 2.2.3 On-demand Expansion Due to the limitation of single process, single application architecture can only be extended horizontally based on the whole system, and it cannot be extended on demand for a specific functional module. The micro-service architecture can perfectly solve the scalability problem. The system can be extended as required. 2.2.4 Error Isolation The micro-service architecture can also enhance the isolation of errors or faults. For example, if a certain service leaks memory, it will only affect itself, and other services can continue working properly. In contrast, if an unqualified component is abnormal in a single application architecture, it may drag down the whole system. 2.2.5 Full Functionalization of the Team The traditional development model usually takes technology as the unit of division of labor, such as UI team, server team and database team. Micro-service advocates a division of labor according to services. Team members need all the skills to design, develop, test and deploy services. In summary, the advantages of the micro-service architecture are obvious. However, no software architecture is perfect. Micro-service architecture also has its shortcoming. Firstly, the design of micro-service architecture does not make all operations service-oriented and componentized. For example, some underlying operations at the database level are not recommended to be service-oriented. The architecture designer needs to make a reasonable division according to the specific circumstances of the business. In addition, the purpose of the micro-service architecture is to solve the efficiency problems in software development and iteration. Because the service functions that should be used internally are exposed in the form of API, it is necessary to add several times of internal data transmissions based on HTTP protocol in one external data service process of software platform. Although the transmissions are completed in the intranet, the calls to the function interfaces in memory are much slower, which is a big challenge for the design and deployment of software platform server when the amount of service increases in later periods. Finally, due to the exposure of APIs in the intranet, designers also need to authenticate key micro-services to ensure that some sensitive micro-services are not abused. 
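To make the idea of "one small business function exposed as an independently deployable HTTP service" concrete, here is a minimal sketch using only the JDK's built-in HTTP server. The service name, port, endpoint path and JSON payload are illustrative assumptions, not details of any system discussed in this paper.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class QuoteService {
    public static void main(String[] args) throws Exception {
        // One small function (serving the latest quotation) running as its own process.
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
        server.createContext("/quotes/latest", exchange -> {
            byte[] body = "{\"instrument\":\"bond-X\",\"price\":101.25}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```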
We have briefly introduced the concepts and implementation principles of domain engineering and micro-services. What is the relationship between domain engineering and micro-services? How can we apply domain engineering modeling ideas and methods to provide a theoretical basis and technical support for the separation of micro-services? The key to designing micro-applications and micro-services is to decompose the architecture. According to the architecture decomposition model, software architecture can be decomposed along four dimensions: business domain, function domain, technology domain and public domain. A micro-service-based application architecture mainly involves three of these dimensions: business domain decomposition, functional domain decomposition and technology domain decomposition. Business domain decomposition decomposes applications into subsystems, functional domain decomposition decomposes subsystems into sub-modules, and technology domain decomposition divides sub-modules into frontend micro-applications, backend micro-services and sub-databases. These three decompositions are iterative processes, from coarse to fine and from fuzziness to clarity. The division of the business domain and the function domain can be based on the ideas of domain engineering to analyze and design the domain of the single application, so as to lay the foundation for the subsequent micro-service transformation. In the follow-up part of this article, we will focus on domain engineering-based micro-service decomposition, rather than on the technical framework and implementation details of micro-services.

3. Introduction to the RMB Trading System

The Interbank RMB Trading System (hereinafter referred to as the RMB Trading System), which is currently in operation, was launched in 2009. It is a complex and huge trading system with the following problems:

High complexity: The current RMB Trading System has millions of lines of code, including a large number of inter-system interfaces and functional modules. Some functional modules have unclear dependencies or blurred boundaries, and the code style and quality are uneven. As a result, impact assessments for new requirements or defect fixes are often incomplete, and even a small change may introduce unpredictable defects.

Poor reliability: Because the functional modules are highly coupled, a small defect in one functional module may make other functional modules unavailable, or even render the whole system unusable.

Legacy technical debt: The RMB Trading System has been in operation for more than ten years. Over time, as requirements changed and developers came and went, the technical debt of the entire system has kept accumulating. Many developers only patch the system where a defect forces them to, and this patch-on-patch style of revision has made the current system even harder to change in both design and code. When new requirements differ significantly from the existing implementation, the cost of modifying the code is very high; some requirements then have to be sacrificed to keep the changes within a controllable range, which leaves the business team dissatisfied with the development team.

Difficulties in deployment: As the code base grows, the build and deployment time of the RMB Trading System has also increased. At present, a full build of the backend takes about half an hour, and a build of the client takes about one hour.
Every time the client needs change or defect repair needs to re-release the client, this full-scale release method takes a long time and has a large impact range, which inevitably leads to higher risks, so that it cannot respond to business needs quickly. Limited expansion capability: At present, the RMB Trading System cannot support horizontal expansion according to business needs, and business processing can easily reach bottlenecks. In addition to logical optimization, it is difficult to expand from the architectural level, resulting in a worse user experience. As mentioned earlier, micro-services are small services that consist of a single application. They have their own processes and lightweight processing. The services are designed according to business functions, deployed in a fully automated manner, and communicated with other services using the HTTP API. At the same time, the service uses the smallest scale of centralized management (such as Docker) capabilities, services can be implemented in different programming languages and databases. The complex and controllable, flexible expansion and independent deployment of micro-services can solve various problems in the current RMB Trading System. 4. The Solution based on Domain Engineering and Micro-service 4.1 The Overall Principles and Methodology for Reforming To retrofit legacy systems based on the architectural philosophy of micro-services, we can follow these steps: Firstly, analyze the functional modules included in the legacy system, extract the common parts, and subdivide each functional module into small functional points as small as possible; Secondly, extract the various function points obtained by analysis and extract the independent modules; Third, strip out the business data, try to make the business data of each micro service independent of each other, isolated from each other without affecting; Fourth, analyze and determine all the functional points that need to be implemented, and determine the business process and development technology selection of each service; Fifth, according to the technical characteristics of micro-services and the business needs of large platforms for summary design and detailed design; Sixth, rapid development and testing of code; Seventh, the container is used to encapsulate the resource environment used by the micro-services. The same container can be directly deployed. 4.2 The Typical Reference Architectures for Legacy Financial Trade System based on Micro-Service Transformation 4.2.1 Aggregator Micro-service Design The aggregator [15] calls multiple services to implement the functionality required by the application. It can be a simple web page that processes the retrieved data. It can also be a higher-level combined micro-service that adds business logic to the retrieved data and then publishes it into a new micro-service, which is consistent with the DRY principle. In addition, each service has its own cache and database. If the aggregator is a composite service, then it also has its own cache and database. The aggregator can be independently expanded along the X and Z axes. **Fig.1 Aggregator micro-service design** ### 4.2.2 Proxy Micro-service Design Proxy micro-service design [16] is a variant of aggregator micro-services design. In this case, the client does not aggregate data, but calls different micro-services based on the difference in business requirements. The proxy can only delegate requests or perform data conversion tasks. 
**Fig.2 Proxy micro-service design**
4.2.3 Chained Micro-service Design
Service A communicates with Service B upon receiving a request; similarly, Service B communicates with Service C. All services use synchronous messaging, and the client blocks until the entire chained call is completed. Therefore, the service call chain should not be too long, to avoid making the client wait too long [17].
4.2.4 Asynchronous Messaging Micro-service Design
Although the REST design pattern is very popular, it is synchronous and can cause blocking. Therefore, some micro-service-based architectures may choose to use message queues instead of REST requests/responses [15].
4.3 Micro-service Decomposition Principles
Based on the principles and methods for the overall transformation of a typical legacy financial trading system, we decompose micro-services according to the following principles.
Single responsibility, to achieve high cohesion and low coupling. Micro-services should be defined at a moderate granularity, avoiding circular and two-way dependencies. The decomposition should be based on domain analysis and design, so that each service has a clear scope of responsibility and clear boundaries. An evolutionary split can adapt to rapid version iteration. Consistent interfaces and data flows eliminate duplicate data and create a clear system of record and a consistent integration interface.
Alignment with the business model, fully considering the independence and specialization of the business. Business needs must be considered when designing and decoupling the architecture. When splitting services, we take the business perspective first and divide boundaries reasonably according to the business functions of each service.
Consideration of the team structure, insisting that a split must bring a net benefit rather than adding maintenance cost. The measure of benefit is that the maintenance cost of the system after the split is lower than before the split, where maintenance cost includes manpower, material resources and time. Since the old and new systems must run in parallel during the transformation, the split must not require additional manpower or raise personnel skill requirements to the point where the ratio of input to output declines.
Compatibility, so that the old and new systems can transition smoothly. When a function is transformed into a micro-service using new technology, the interface should remain as transparent as possible, and API changes should be invisible to users.
In short, when splitting micro-services we must adhere to the principles of being business-oriented, keeping things simple, and dividing and conquering, fully considering single responsibility, service granularity, business needs, the benefit of the split, and version compatibility, rather than merely carving the service into many small modules from a purely technical point of view.
Based on the decomposition principles above, we adopt the AKF scaling principle [18]. The AKF Scale Cube (from The Art of Scalability) is an abstract summary by AKF's technical experts of the three dimensions along which an application can be scaled. In theory, following this model, a single system can be expanded almost without limit and is highly scalable.
X-axis: horizontal replication; in short, running multiple instances of the same system behind a load balancer.
Z-axis: data partitioning. When a traded product is a high-frequency product, its data is partitioned according to trading activity so that the data volume handled by each partition is roughly balanced, ensuring real-time processing.
Y-axis: the micro-service split mode, splitting the system along different business functions.
The advantages of this splitting principle are: (1) low coupling and high cohesion, since each service completes a single independent function; (2) maintenance can be aligned with the team structure, so that small teams can iterate quickly. Each service has its own characteristics, is maintained independently, and supports scaling without limit.
4.4 Domain Entity and Object Definition, System Splitting
Based on the old legacy financial trading system, domain analysis is carried out from the perspective of requirements. First, we need to define domain entities and domain objects. A domain entity is a domain concept that needs a unique identity in the domain: an object that is uniquely identified and distinguished in the business and that needs to be tracked continuously. An object is an entity if it maintains continuity throughout its lifecycle, independently of its attributes (even if those attributes are important to system users). A domain object, by contrast, can be thought of as an attribute of an entity. The fundamental difference between an entity and an object is that an object only needs to know what it is, while an entity needs to know not only what it is but also which one it is.
A domain consists of multiple subdomains, each corresponding to a different part of the business. We divide subdomains into three categories:
Core subdomain: the core differentiator of the business and the most valuable part of the application; it must reflect the unique competitiveness of the system.
Universal subdomain: not specific to the business and without specialized requirements.
Support subdomain: needed to support the business, but neither core nor universal.
Based on these domain concepts, we can design the trading model as follows. Core subdomain: quotation, transaction, order. Universal subdomain: user login, permission verification. Support subdomain: calculation, market.
4.5 Coexistence of New Micro-services and the Existing System
During the decomposition, we can keep the system running normally and make the transition between the old and new architectures as smooth as possible by following the principles below.
4.5.1 Proceed in an Orderly Way, Step by Step
Through "function module stripping plus a small amount of interface code modification", system functions are stripped out gradually. The stripping order follows the principle of "easy before difficult, from the outside in": it starts with peripheral modules and supporting auxiliary functions, and gradually goes deeper into the core business modules.
In our case, we give priority to transforming the maintenance and query modules of the trading system, because these modules are relatively independent: even if the transformation proves difficult or functional problems occur, it will not have a significant impact on the core functions of the system.
4.5.2 Coexistence of New and Old
Following the step-by-step idea, we abandon the previous upgrade method of "picking a cut-over time and switching from the old system to the new one" and instead adopt continuous release, keeping the old and new systems coexisting throughout the transformation. That is, after each new service is stripped out and brought online, a frontend proxy transfers the relevant functional requests to it, so that the service switch is transparent to users.
4.5.3 Data Redundancy and Synchronization
Under a micro-service architecture, each service should have its own independent data storage. Therefore, during service decomposition, data must on the one hand be decoupled to achieve functional decoupling, while on the other hand data redundancy can be used for data that is difficult to decouple completely. Depending on the real-time requirements of the data, data synchronization tools can be used to synchronize the relevant data to the new service nodes.
5. Summary and Future Work
Starting from the concepts of domain engineering and micro-services, this article proposes a micro-service design approach based on domain engineering and applies it to the transformation of a legacy system with a monolithic architecture. It is relatively easy to build a system with a micro-service architecture from scratch, because there are no constraints from an existing system. How to decompose and gradually transform a legacy system, and how to ensure the smooth coexistence of the new micro-services and the legacy system to the greatest extent, is the focus of this article; we believe this experience can provide guidance and reference for similar legacy system transformations. In Section 4 we also introduced typical micro-service reference architectures for a legacy financial trading system, which provide ideas for the future design of similar business systems. From a technical point of view, this article explores the application architecture based on micro-services and focuses on the methodology and reference architectures for transforming an existing system. At present, the micro-service transformation of the legacy financial trading system introduced in this article is still in progress. We will continue to summarize and refine the transformation, analyze the problems encountered at the implementation level, and research and implement the development, deployment, routing and monitoring tools needed for micro-applications and micro-services.
In addition, the micro-service architecture brings not only technological change but also change in management methods. To apply the micro-service architecture well, enterprises need to move away from the traditional management mode; DevOps [19] is a management concept that complements micro-service technology. By introducing a DevOps mechanism, the application deployment package built in the development phase is turned into a container image that serves as the delivery artifact through the subsequent testing and release phases.
Because containers are portable, the same application runtime environment can be cloned quickly in the testing and production environments, realizing the integration of development, operations and maintenance. The enterprise's organizational structure and delivery process likewise require substantial reform of existing mechanisms. Once these process and technology innovations are completed, the cost of future system construction will be greatly reduced, system quality will be greatly improved, and the enterprise's ability to embrace business change will take a qualitative leap.
References
{"Source-Url": "https://download.atlantis-press.com/article/55913117.pdf", "len_cl100k_base": 5837, "olmocr-version": "0.1.50", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 23870, "total-output-tokens": 7442, "length": "2e12", "weborganizer": {"__label__adult": 0.0004520416259765625, "__label__art_design": 0.0009694099426269532, "__label__crime_law": 0.0003542900085449219, "__label__education_jobs": 0.0007352828979492188, "__label__entertainment": 9.447336196899414e-05, "__label__fashion_beauty": 0.0002112388610839844, "__label__finance_business": 0.0024394989013671875, "__label__food_dining": 0.0004405975341796875, "__label__games": 0.0007376670837402344, "__label__hardware": 0.0010957717895507812, "__label__health": 0.0005521774291992188, "__label__history": 0.0003476142883300781, "__label__home_hobbies": 8.946657180786133e-05, "__label__industrial": 0.0005950927734375, "__label__literature": 0.0002722740173339844, "__label__politics": 0.00039076805114746094, "__label__religion": 0.000438690185546875, "__label__science_tech": 0.0213775634765625, "__label__social_life": 6.431341171264648e-05, "__label__software": 0.0068206787109375, "__label__software_dev": 0.96044921875, "__label__sports_fitness": 0.00032973289489746094, "__label__transportation": 0.000713348388671875, "__label__travel": 0.0002391338348388672}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 35160, 0.02054]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 35160, 0.35969]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 35160, 0.91377]], "google_gemma-3-12b-it_contains_pii": [[0, 3899, false], [3899, 8399, null], [8399, 12217, null], [12217, 16367, null], [16367, 20237, null], [20237, 21300, null], [21300, 22086, null], [22086, 25111, null], [25111, 29046, null], [29046, 33140, null], [33140, 35160, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3899, true], [3899, 8399, null], [8399, 12217, null], [12217, 16367, null], [16367, 20237, null], [20237, 21300, null], [21300, 22086, null], [22086, 25111, null], [25111, 29046, null], [29046, 33140, null], [33140, 35160, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 35160, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 35160, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 35160, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 35160, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 35160, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 35160, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 35160, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 35160, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 35160, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 35160, null]], "pdf_page_numbers": [[0, 3899, 1], [3899, 8399, 2], [8399, 12217, 3], [12217, 16367, 4], [16367, 20237, 5], [20237, 21300, 6], [21300, 22086, 7], [22086, 25111, 8], [25111, 29046, 9], [29046, 33140, 10], [33140, 35160, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 35160, 0.0]]}
olmocr_science_pdfs
2024-12-03
2024-12-03
ea290be7234843962517d1a9170f8d3b1b91a7ad
D3PART: A new Model for Redistribution and Plasticity of 3D User Interfaces
Jérémy Lacoche, Thierry Duval, Bruno Arnaldi, Éric Maisel, Jérôme Royan
HAL Id: hal-01293037
https://hal.archives-ouvertes.fr/hal-01293037
Submitted on 24 Mar 2016
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
D3PART: A new Model for Redistribution and Plasticity of 3D User Interfaces
Jérémy Lacoche* (IRT b<>com; UMR CNRS 6074 Irisa - Inria Rennes), Thierry Duval† (UMR CNRS 6285 Lab-STICC; Telecom Bretagne; IRT b<>com), Bruno Arnaldi‡ (UMR CNRS 6074 Irisa - Inria Rennes; INSA de Rennes; IRT b<>com), Éric Maisel§ (UMR CNRS 6285 Lab-STICC; ENIB; IRT b<>com), Jérôme Royan¶ (IRT b<>com)
ABSTRACT
In this paper we propose D3PART (Dynamic 3D Plastic And Redistributable Technology), a model to handle redistribution for 3D user interfaces. Redistribution consists in changing the distribution of the components of an interactive system across different dimensions such as platform, display and user. We extend previous plasticity models with redistribution capabilities, which lets developers create applications where 3D content and interaction tasks can be automatically redistributed across the different dimensions at runtime.
Keywords: Plasticity, Redistribution, 3D User Interfaces
Index Terms: H.5.1 [Information interfaces and presentation]: Multimedia Information Systems—Artificial, augmented, and virtual realities; H.5.2 [Information interfaces and presentation]: User Interfaces—Graphical user interfaces (GUI)
1 INTRODUCTION AND RELATED WORK
Today, users have access to a wide variety of platforms such as mobile devices, desktop computers and immersive systems. Therefore, users are more frequently confronted with situations where they have to move from one platform to another [7]. Moreover, combining different platforms can offer users new interaction prospects. These possibilities directly refer to "distributed user interfaces" (DUI) and redistribution. A DUI is a user interface whose components are distributed across different dimensions [8]. For 3D user interfaces we consider three dimensions of distribution from the ones described in [8] and [15]:
• Display. The application content is displayed on one or multiple devices. Common examples in 3D for this kind of distribution are multiple-display systems.
• Platforms. The application runs on a single computing platform or is distributed across multiple ones. These platforms may be homogeneous or heterogeneous (operating system, computing power, plugged devices). For instance, cluster approaches combine connected homogeneous computers to run a VR application with high performance.
• Users. The application is shared by multiple users. This dimension is directly linked to the two other ones as the different participants can use different displays and platforms. In 3D, this dimension refers to Collaborative Virtual Environments (CVE).
Redistribution consists in changing the distribution of an interactive system on these dimensions.
It can be system-initiated, user-initiated, or mixed-initiated [7]. Redistribution can be performed at runtime or between sessions, and its granularity may vary from application to pixel level [4]:
• At application level, on the platform or user dimension, the application is fully replicated or fully migrated to a distant platform. The application may be adapted to its new context of use. Full replication implies state synchronization to maintain consistency between the different instances of the application, while for a full migration no synchronization is performed.
• At workspace level, workspaces can be redistributed on the three dimensions. A workspace is an interaction space that groups together interactors supporting the execution of a set of logically connected tasks. For instance, the painter metaphor [16] includes two workspaces: the palette of tools on a mobile device and the drawing area on an electronic whiteboard.
• At domain concept level, physical interactors can be redistributed on the different dimensions. In 3D, this corresponds to interaction techniques and widgets. In [14], physical interactors for navigation, pointing and application control are distributed on a tablet in order to interact in an immersive system.
• At pixel level, view continuity is ensured across different displays with a distribution on the display and the platform dimensions. For instance, an application can be distributed on a cluster of PCs and rendered on multiple displays with view continuity.
Redistribution is one means of adaptation addressed by plasticity, which is the capacity of an interactive system to withstand variations of both the system's physical characteristics and the environment while preserving its usability [18]. The second means of adaptation addressed by plasticity is recasting, which consists in locally modifying the application components in order to fit a given context of use, such as interaction technique adaptations or content presentation modifications. Recasting is needed to handle redistribution, because variations in input and output capacities from one platform to another imply local adaptations of the redistributed components. In 3D, solutions exist for the creation of reconfigurable applications [9] and adaptive ones [13], and recent approaches tend to bring plasticity to 3D with a focus on recasting [12]. Most of the solutions that handle redistribution are designed for 2D user interfaces, such as the 4C reference framework [7], the peer-to-peer architecture proposed by Melchior et al. [15], the PolyChrome framework [2], or the ZOIL framework [19].
Figure 1: D3PART extends our previous plasticity models [12] by integrating an adaptation and a redistribution process.
Solutions to create distributed 3D user interfaces also exist, but they mainly focus on specific cases and do not let the end-user change the system distribution at runtime. One specific case handled in 3D is that of clusters of computers that manage multiple-display systems such as CAVEs [6] or Workbenches. VR Juggler [3] and MiddleVR\(^1\) propose such solutions. The second specific case handled in 3D is the field of CVE, which needs a distribution at the platform and user levels. It implies state synchronization between the different users' platforms in order to maintain a consistent application. Some architectures for CVE are reported in [10].
Our contribution is D3PART (Dynamic 3D Plastic And Redistributable Technology), a new model that helps developers create 3D user interfaces that can be dynamically redistributed across different dimensions: platform, user and display. The model includes an adaptation process and a redistribution process for the creation of plastic 3D applications. We focus on redistribution at the application, workspace, and domain concept levels; the pixel level on clusters of PCs is not covered. We present one redistribution scenario in which we combine a tablet and an immersive system for a furniture planning application. This prototype is developed with a toolkit that implements the D3PART model.
2 APPLICATION MODEL AND DYNAMIC RECASTING
As shown in Figure 1, in order to design 3D applications that handle plasticity, recasting and redistribution, D3PART extends our previous plasticity models [12]. First, this previous work introduces a device model for the description of any platform. This device model describes precisely all the devices that can be used for interaction purposes at runtime. It includes device capabilities, limitations and representations in the real world. Second, it introduces a model for developing concrete application components independently from any 3D framework or 3D devices. These components are deployed at runtime to achieve high-level interaction tasks, which are also represented in a model. For 3D user interfaces, according to Hand [11], these tasks belong to three categories: selection and manipulation, application control, and navigation. For instance, an application component can correspond to an interaction technique or a 3D widget. This model is a modification of the PAC [5] and ARCH [1] models. It divides a component into five facets that decouple its features:
- The Abstraction describes the semantics of the component and the function it can perform.
- The Rendering Presentation facet is the only facet depending on a 3D framework. It handles graphics output and physics. In our examples these facets are developed with Unity3D\(^2\).
- The Logical Driver handles device management. It can implement how an interaction technique is controlled according to a set of abstract interaction devices. In this facet, the developer describes all required input and output units according to a set of parameters taken from the device model.
- The Control ensures the consistency between the rendering presentation, the logical driver and the abstraction.
- The Supervision Control receives the context modifications at runtime and is then able to determine if a logical driver is still possible. It contains all logical driver and rendering presentation types compatible with the application component.
\(^1\)http://www.middlevr.com/middlevr-adk/
\(^2\)https://www.unity3d.com/
Figure 2: With D3PART, an application is described by a virtual environment and high level tasks. Compatible application components are deployed to achieve the tasks according to the encountered context of use.
D3PART uses these models to describe an application and the context of use. As shown in Figure 2, we define an application with a set of high level interaction tasks and with a description of the virtual environment. First, the application developer chooses a set of tasks to represent the application behavior and possibilities at a high level. Dependencies between the tasks can be described by the developer. For instance, an application control task with a menu will be dependent on a selection task.
In our implementation, these needed tasks and the dependencies must be provided by the application developer or the designer in an XML configuration file. New tasks can be implemented by a developer and added to the list of possible ones. A task can define different functions (the task events) that constitute the application logic, such as adding an object into the scene or loading a new scene configuration. A task also exposes a list of compatible application components that can be deployed to achieve it. This list is also edited in an XML file. These components must be implemented with the application component model described in [12]. This previous work gives examples of possible tasks, application components and logical drivers.
Second, the application is described with its virtual environment. The virtual environment is composed of visual (3D content) and sound assets. It can be edited separately from the tasks, for instance in a game engine editor, or loaded from an X3D file, depending on the implementation of the models used. In our case, as said, we use an implementation based on Unity3D.
The application is launched on a platform described with the device model previously introduced. Each device corresponds to a class that inherits from the basic device class. In this class, with the device SDK, the developer has to complete some functions to fill in the input data, trigger the outputs and tell the system when a new instance of the device is plugged or unplugged.
At runtime, high level tasks are automatically associated with concrete application components according to the encountered context of use. For these components, the rendering presentation facet and the logical driver facet are also chosen according to the context; the control facet and the abstraction facet do not depend on this context. The association is performed with an automatic adaptation process included in D3PART on top of the device and task models in order to support dynamic recasting. The association is made with a scoring system that takes into account the platform capabilities and the list of compatible components exposed by each task. Its goal is to maximize the usability of the application. We will not give a full description of this scoring mechanism because it is not the scope of this paper. The association process is performed at each context change in order to detect new usable application components or more adapted ones, so that the optimal usability of the application is always ensured.
For redistribution, the different platforms can register with each other. For now, this feature is implemented with the network capabilities of the target 3D framework and is integrated into the rendering presentation facet. As future work, this mechanism could become independent of the 3D framework and be implemented in the abstraction facet. For now, our implementation does not show apparent latency, but being independent from the 3D framework would let us optimize the network load. As proposed in the 4C reference framework [7], this component implements an integrated user interface for platform registration and for controlling the redistribution process: the meta-user interface. In our case, the redistribution is performed at runtime and is user-initiated: the meta-user interface is proposed to the end-user of the application. It can be shown and hidden at runtime with a graphical button or a device button depending on the context of use.
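Before turning to the redistribution process itself, the sketch below gives a rough, purely illustrative picture of the task-to-component association described above. The paper deliberately does not detail the scoring mechanism, and the real implementation relies on Unity3D, so this Java sketch is only a schematic analogue: all class names, fields and the scoring function are hypothetical.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical sketch of associating a high-level task with the best-scoring
// compatible application component for the current platform context.
class PlatformContext {
    List<String> availableDevices;          // e.g. "multi-touch", "head-tracking"
    PlatformContext(List<String> devices) { this.availableDevices = devices; }
}

class ComponentDescriptor {
    String name;
    List<String> requiredDevices;           // devices its logical driver needs
    double baseUsability;                   // assumed usability weight
    ComponentDescriptor(String name, List<String> required, double usability) {
        this.name = name; this.requiredDevices = required; this.baseUsability = usability;
    }
    // A component is deployable only if every required device is present.
    boolean deployableOn(PlatformContext ctx) {
        return ctx.availableDevices.containsAll(requiredDevices);
    }
    // Illustrative score: the usability weight of a deployable component.
    double score(PlatformContext ctx) {
        return deployableOn(ctx) ? baseUsability : Double.NEGATIVE_INFINITY;
    }
}

class TaskAssociation {
    // Picks, among the components a task declares as compatible, the one
    // with the highest score for the encountered context of use.
    static Optional<ComponentDescriptor> associate(List<ComponentDescriptor> compatible,
                                                   PlatformContext ctx) {
        return compatible.stream()
                .filter(c -> c.deployableOn(ctx))
                .max(Comparator.comparingDouble(c -> c.score(ctx)));
    }
}
```

Re-running such an association whenever the context changes (for example, when a device is plugged or unplugged) corresponds to the dynamic recasting described above.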
The redistribution process is performed in four steps as shown in Figure 3. The first step consists in connecting to the redistribution server. The IP address of the server can be given in the meta-user interface or in the XML task configuration file. This step must be performed on the current used platform and on each platform that must be available for redistribution. On the distant platforms, an empty application runs. It contains the framework that implements the D3PART model and it declares the redistribution task as needed. The second step consists in configuring the desired redistribution with the meta-user interface. First, the user chooses the platform on which the application will be redistributed from a list of available ones. In our case, the basis of the redistribution process is made on the platform dimension. However, as each platform may manage another display and may be used by another person, user and display dimensions can also be targeted. Then, the user configures the high level tasks distribution across the two platforms. As shown in Figure 4, multiple choices are given to the user in the menu: - Full migration: all tasks migrate. Each platform runs an independent version of the application. It can be performed when the user wants to switch to another platform. - Partial migration: the user chooses which task(s) will migrate to the distant platform. The application is distributed and shared between the two platforms. It can be performed to combine different platforms. - Partial replication: the user replicates some tasks to the distant platform. He will be able to perform these tasks on the two platforms within the same shared application. - Full replication: all tasks are replicated and can be performed on different platforms in the same shared application. This kind of redistribution can be used to start a collaboration with a user on a different platform. Dependent tasks have to be redistributed together. Therefore, they are grouped into the menu as shown in Figure 4. In the meta-user interface we associate a warning icon to a task if it cannot be performed on the distant platform. To do so, we ask the distant platform if an application component can be deployed for each task according to the platform capabilities. The goal of this feature is to warn the end user that the application can be degraded if this task is redistributed. On the other platform, thanks to adaptation process included in D3PART, an adapted application component is automatically associated with each redistributed task. In the third step we replicate the virtual environment to the distant platform. The goal is to keep the application state during the redistribution to the target platform. It includes 3D meshes, their materials, and sound assets. To do so, we consider three solutions: - Assets are known in the distant platform. Only the names are transmitted. This is the currently implemented solution. - Assets are not known but can be downloaded from a distant server. In this case, URLs are provided. - Assets are unknown. For instance when a user is editing a new 3D content. Here, assets can be streamed over the network. In the last step we synchronize the different platforms. As for CVEs, a synchronization is performed in order to keep a consistent state between the instances of the same application running on different platforms. In case of full migration, no synchronization is performed because each platform runs an independent version. 
First, the 3D objects' transforms are synchronized in order to maintain consistency between the different 3D worlds. Second, task events are also synchronized. The events constitute the application logic and have to be performed synchronously on each application instance. To do so, we use an observer design pattern: the redistribution component observes all task events, and when an event is triggered, it is transmitted with its parameters through the network as text messages in order to be triggered on the distant platform. During a full replication, a collaborative context of use can be created. To handle concurrency when moving objects, the priority to move an object is given to the first user who grabs it; other users cannot move an object until the first user has released it. Other mechanisms could be integrated as well. We also provide awareness about the activity of the distant user: for now we only display the view frustum of each user, but avatars and hands could be added too.
4 REDISTRIBUTION FOR PLATFORMS COMBINATION
The implementation of the D3PART model has been used to develop a furniture planning application. Its goal is to help people plan the use of particular premises. Here, we demonstrate how two different platforms can be combined to interact with this application thanks to the D3PART model. The application is composed of three tasks. First, a navigation task is needed in order to navigate within the room. Second, we need an application control task for adding furniture into the room with the help of a menu; the addition of an object is defined as an event of the task. Last, we need a selection and manipulation task for moving furniture and for menu selections. These two last tasks are defined as dependent: indeed, selection possibilities are needed for interacting with the menu.
In this scenario we use an Android tablet and a CAVE with active stereo. MiddleVR is used to handle the different screens and clustering. Some novice users may not be confident with 3D interactions and may prefer more common multi-touch interactions. With D3PART, the user can distribute the selection and manipulation operations on the tablet and the navigation in the CAVE. The user is then able to interact with the familiar and easy-to-use multi-touch capabilities of the tablet while being immersed at scale one in the CAVE. The tablet acts like a remote World-In-Miniature [17]. To do so, the user chooses a partial migration to the CAVE: only the navigation task migrates to the distant platform, while the other tasks remain on the tablet. This choice is made with the meta-user interface as shown in Figure 4. On the tablet, for the furniture control task, a 2D menu is instantiated with the list of furniture that can be added. For the manipulation task, an interaction technique based on the multi-touch capabilities of the tablet is deployed: with this technique the user can translate objects on the floor with one finger and rotate them around the up axis with two fingers. In the CAVE, an interaction technique based on a walking metaphor, controlled with head tracking and a joystick, is deployed for the navigation task. It places the point of view inside the room in order to immerse the user in it. At this point the application is distributed across two platforms and displays, as shown in Figure 5: a remote World-In-Miniature is on the tablet and, at the same time, the user is immersed at scale one in the room in the CAVE.
The synchronization of the 6-DoF transforms of the objects between the two platforms ensures consistency when the user moves an object on the tablet. Likewise, the command for adding an object into the room is also synchronized. Both systems run at approximately 25 fps; the difference in frame rates does not impact the synchronization. The meta-user interface is also available in the CAVE, so the user can migrate the full application back to the tablet when he has finished. Other redistribution scenarios could also be imagined with D3PART: for instance, a user interacting on a tablet could migrate the whole application to a CAVE in order to continue his work with 3D interactions, or fully replicate his application to a colleague's platform in order to start a collaboration.
5 CONCLUSION AND FUTURE WORK
D3PART is a new model to handle plasticity and redistribution for 3D user interfaces. With D3PART, redistribution can be performed on the display, platform and user dimensions and can target three levels of granularity: the application, workspace, and domain concept levels. Redistribution can be performed at runtime by the user with an integrated user interface: the meta-user interface. The dynamic recasting handled by D3PART, through the included adaptation process, ensures usability continuity whatever new distribution is chosen. Future work will consist in automating the redistribution process to make it system-initiated or mixed-initiated, which could consist in finding the right platform or the right user for each task according to the platforms' capabilities and the users' preferences. We could also consider levels of detail during the virtual environment replication, as the platforms may not all have the same computation capabilities. Last, we will evaluate the system to assess its interest, usability and acceptability for end users.
References
{"Source-Url": "https://hal.archives-ouvertes.fr/hal-01293037/file/redistribution_draft.pdf", "len_cl100k_base": 4475, "olmocr-version": "0.1.48", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 17032, "total-output-tokens": 5985, "length": "2e12", "weborganizer": {"__label__adult": 0.000484466552734375, "__label__art_design": 0.0041656494140625, "__label__crime_law": 0.0004382133483886719, "__label__education_jobs": 0.0013275146484375, "__label__entertainment": 0.00027370452880859375, "__label__fashion_beauty": 0.00026416778564453125, "__label__finance_business": 0.0002799034118652344, "__label__food_dining": 0.0004673004150390625, "__label__games": 0.0015325546264648438, "__label__hardware": 0.0025615692138671875, "__label__health": 0.0006885528564453125, "__label__history": 0.0006761550903320312, "__label__home_hobbies": 0.000141143798828125, "__label__industrial": 0.00067901611328125, "__label__literature": 0.0004968643188476562, "__label__politics": 0.0003139972686767578, "__label__religion": 0.0007233619689941406, "__label__science_tech": 0.265869140625, "__label__social_life": 0.00013005733489990234, "__label__software": 0.030242919921875, "__label__software_dev": 0.68701171875, "__label__sports_fitness": 0.0003323554992675781, "__label__transportation": 0.00070953369140625, "__label__travel": 0.00033354759216308594}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26088, 0.03465]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26088, 0.14093]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26088, 0.90302]], "google_gemma-3-12b-it_contains_pii": [[0, 1105, false], [1105, 6315, null], [6315, 13296, null], [13296, 17450, null], [17450, 24193, null], [24193, 26088, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1105, true], [1105, 6315, null], [6315, 13296, null], [13296, 17450, null], [17450, 24193, null], [24193, 26088, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26088, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26088, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26088, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26088, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26088, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26088, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26088, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26088, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26088, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26088, null]], "pdf_page_numbers": [[0, 1105, 1], [1105, 6315, 2], [6315, 13296, 3], [13296, 17450, 4], [17450, 24193, 5], [24193, 26088, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26088, 0.0]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
4023fad3a03668969a32be2cead1ca480e63c021
UNDERSTANDING AGILITY IN SOFTWARE DEVELOPMENT FROM A COMPLEX ADAPTIVE SYSTEMS PERSPECTIVE
Xiaofeng Wang, Lero, the Irish Software Engineering Research Centre, Limerick, Ireland, xiaofeng.wang@ul.ie
Kieran Conboy, National University of Ireland, Galway, Ireland, kieran.conboy@nuigalway.ie
Abstract
Agile software development methods have emerged in recent years and have become increasingly popular since the start of the century. While much research claims to study agile methods, the meaning of agility itself in software development is yet to be fully understood. Agility is viewed by some as the antithesis of plan, structure, discipline and bureaucracy. This study aims to develop a better understanding of agility, using the key concepts of Complex Adaptive Systems as a theoretical lens. The study explores agility from several different angles, including autonomous team, stability and uncertainty, and team learning. A multiple case study research method was employed. The findings of the study emphasize that agility is manifested as stability and discipline, which are just as desirable as flexibility, and that context sharing is of the same value and importance as knowledge sharing. In addition, the collective nature of learning is underlined.
Keywords: agility, complex adaptive systems, autonomy, stability, team learning
1 INTRODUCTION
The last ten years or so have seen the emergence of agile software development methods as a response to the inefficiency of existing software development methods in rapidly changing environments (Highsmith 2002), e.g. eXtreme Programming (XP) (Beck 1999) and Scrum (Schwaber & Beedle 2002). A brief reflection on the history of the agile software development movement, however, reveals that agile methods originated as a set of techniques and practices, and the term agile is more a post-rationalization to justify a set of existing "light-weight" methods. Agility in software development has been interpreted in many different ways in practice. Skepticism and criticism of agile methods place agility in opposition to plan, structure and discipline, which are generally considered the core components of more traditional waterfall methods (Rakitin 2001, Stephens & Rosenberg 2003). To clarify the meaning of agility, Conboy and Fitzgerald (2004) conduct a review of the literature on agility across several disciplines including manufacturing, business and management, and carefully distinguish several intertwined concepts, including flexibility and leanness. Based on the comparison and contrast of these concepts, they provide a broad definition of agility as "the continual readiness of an entity to rapidly or inherently, proactively or reactively, embrace change, through high quality, simplistic, economical components and relationships with its environment" (Conboy & Fitzgerald 2004, p.40). Lyytinen and Rose (2006) explore agility in an information systems development (ISD) context.
They claim that ISD agility is concerned with why and how ISD organizations sense and respond swiftly as they develop and maintain information system applications. They outline a theory of ISD agility drawing upon a model of Information Technology (IT) innovation and organizational learning which adopts March’s (1991) concepts of exploration and exploitation. Their empirical study shows that the concept of ISD agility is more multifaceted and contextual than conceived so far in the literature. It relates to being nimble in terms of the velocity to absorb base innovations and innovate with IS products; the velocity to shift from one innovation regime to another (organizational flexibility); the velocity to learn from experiences (trial and error learning); and the velocity to deliver IS solutions. Each one of these demands different competencies and expects managerial shaping of alternative organizational goals and incentives. Their findings suggest that the dynamics and interactions between these four types of agility form different ecological niches. Each one follows a different organizing logic. Managers must view the meaning of agility differently in each niche. While these studies help to understand agility, and do highlight the lack of theoretical foundation regarding agility in an ISD context, they do not address specifically how agility is manifested in software development environments. Based on this observation, this study investigates the meaning of agility in software development using the lenses of Complex Adaptive Systems (CAS), an important branch of the complexity study which provides insights of how a system can be adaptive to its environment. (Note that in the following sections the full phrase complex adaptive system is used to refer to an instance of a complex system that demonstrates an adaptive nature, while CAS is used to refer to the study and theory of such systems.) The empirical part of the study employs a multiple-case study approach. The remaining part of the paper is organized as follows. Section 2 introduces the key concepts of CAS and builds a conceptual framework based on CAS which guides the empirical investigation; Section 3 describes the research method and the context of the empirical study; then the findings are presented in Section 4 and discussed in Section 5. The paper ends up with a conclusion section where the implications and limitations of the study are reviewed and the future work summarized. 2 A COMPLEX ADAPTIVE SYSTEMS PERSPECTIVE ON AGILITY A complex adaptive system, roughly defined, consists of a large number of agents, each of which behaves according to some set of rules. These rules require agents to adjust their behaviour to that of other agents. They interact with, and adapt to, each other. CAS seeks to identify common features of the dynamics of such systems or networks in general (Stacey 2003). There is no single and definitive account of CAS. Anderson (1999), Mitleton-Kelly (2003) and Stacey (2003) provide valuable introductions to CAS in the context of organization and management. Four key concepts of CAS in the centre of these accounts are of particular relevance to this study: inter-connected autonomous agents, self-organization, the edge of chaos and emergence. These key concepts provide a new perspective to investigate different facets of agility as a desirable property for software development teams in constantly changing environments. 
The concepts of inter-connected autonomous agents and self-organization suggest that, to be agile, a software development team should be composed of autonomous members who have their own schemata, which generally refer to norms, values, beliefs, and assumptions that are held by individuals (Senge 1990, Schein 1997). Team members are interconnected in such a way that a decision or action by any individual may affect related individuals and the team. A team composed of autonomous but inter-connected members can spontaneously come together to perform a task (or for some other purpose); the team decides what to do, how and when to do it; and no one outside the group directs those activities (Mitleton-Kelly 2003). To do so, a team needs energy imported into and constantly flowing within it, which can be interpreted, partly, as the sharing of information, knowledge or other resources needed to sustain self-organized activities. The edge of chaos provides organizations “with sufficient stimulation and freedom to experiment and adapt but also with sufficient frameworks and structure to ensure they avoid complete disorderly disintegration” (McMillan 2004, p. 22). Brown and Eisenhardt (1998) contend that, to compete at the edge, organizations must understand what to structure and what not to structure, to foster communication and to capture cross-business synergies. The edge of chaos concept suggests that being agile is neither chaotic nor static. It needs stability but not so much that order prevails and innovation is stifled. It is a delicate balance of both. The concept of emergence sheds new light on learning, which can be seen as a collective behavior of creating new patterns of thought at the team level based on the interaction of individuals, instead of often seen exclusively as the provision of individual training. Learning means not only training or the acquisition of new skills, but also the gaining of insight and understanding which leads to new knowledge and behavior. When learning leads to new behavior, the team can be said to have adapted and evolved (Mitleton-Kelly 2003). An agile team facilitates team learning and generation of new knowledge. In addition, new knowledge needs to be shared to generate further new learning, knowledge and behavior. In summary, this study investigates the meaning of agility from three facets: autonomous but sharing team, stability with embraced uncertainty and team learning, as shown in Table 1. <table> <thead> <tr> <th>Facets of Agility</th> <th>Underlying CAS Concepts</th> <th>Relevant Studies</th> </tr> </thead> <tbody> <tr> <td>Autonomous but sharing team</td> <td>Inter-connected autonomous agents&lt;br&gt;Self-organization</td> <td>Anderson 1999; Choi et al. 2001; Mitleton-Kelly 2003</td> </tr> <tr> <td>Stability with embraced uncertainty</td> <td>The edge of chaos</td> <td>Brown and Eisenhardt 1998; Stacey 2003</td> </tr> <tr> <td>Team learning</td> <td>Emergence</td> <td>Mitleton-Kelly 2003; Stacey 2003</td> </tr> </tbody> </table> Table 1. Agility through the CAS perspective 3 RESEARCH APPROACH This study adopts an interpretivist stance, emphasizing that agility are situational and can be better understood through the understanding and sense making of people who are involved in software development. In particular, this study employs a qualitative approach, treating agility as a qualitative property of a software development team that can be better studied through words and the meanings people ascribe to them rather than numbers or frequencies. 
The specific research method used in this study is case study, which is an appropriate approach when a research phenomenon is investigated in its real-life context (Yin 2003). A multiple-case design is employed. Given the research focus of the study, the level of inquiry is at the team level, so it seems appropriate to take a software development team as a case. The unit of analysis is the software development team. Three software development teams - XPTeam A, XPTeam B and WaterfallTeam - from two different companies were chosen as the cases. XPTeam A is a representative case; XPTeam B is a confirming case of the first one; and WaterfallTeam is a contrasting case, following the strategy suggested by Yin (2003). The profiles of the three cases are shown in Table 2. XPTeam A is a software development team in SecureSoft, a small software house specialized in network security and management systems development. XPTeam B and WaterfallTeam are software development teams in WorldTech, a major IT company providing both IT projects and services.
<table>
<thead>
<tr><th></th><th>XPTeam A</th><th>XPTeam B</th><th>WaterfallTeam</th></tr>
</thead>
<tbody>
<tr><td>Team size</td><td>4</td><td>8</td><td>5</td></tr>
<tr><td>Team composition</td><td>3 developers, 1 project manager</td><td>6 developers, 1 test manager, 1 project manager</td><td>4 developers, 1 project manager</td></tr>
<tr><td>Development method</td><td>XP</td><td>XP</td><td>Waterfall style mixed with some agile elements</td></tr>
<tr><td>Years of method use</td><td>4.5 - 5 years</td><td>11 months to 1.5 years</td><td>More than 5 years</td></tr>
<tr><td>Location</td><td>Co-located in an open office space</td><td>Co-located in a semi-open office space</td><td>Co-located in a semi-open office space</td></tr>
<tr><td>Software developed</td><td>Application for external customer</td><td>Web application for internal use</td><td>Backend application for internal use</td></tr>
</tbody>
</table>
Table 2. The profiles of the three cases
Two rounds of data collection were conducted, with an interval of six months between them. The main data collection method used is semi-structured face-to-face interviews, with open-ended questions. The members of each team are interviewed, and each interview lasts between 30 minutes and two hours. In all the cases, most interviewees are interviewed twice. Table 3 lists the people interviewed in each team. Documents regarding the development processes of the case teams are collected when available. Some non-participative observations are conducted as opportunities occur. Field notes are taken during both rounds of data collection.
<table>
<thead>
<tr><th></th><th>XPTeam A</th><th>XPTeam B</th><th>WaterfallTeam</th></tr>
</thead>
<tbody>
<tr><td>First round interviews</td><td>1 group interview (with the 4 team members below), 4 individual interviews:<br>- Project manager<br>- Coach<br>- Developer A<br>- Developer B</td><td>5 individual interviews:<br>- Project manager<br>- Team lead<br>- Tech lead<br>- Developer A</td><td>1 individual interview</td></tr>
<tr><td>Second round interviews</td><td>2 group interviews (with the team members below), 3 individual interviews:<br>- Coach<br>- Developer A<br>- Developer B<br>- Developer C</td><td>6 individual interviews:<br>- Test manager<br>- Project manager<br>- Team lead<br>- Tech lead<br>- Developer A</td><td>3 individual interviews:<br>- Project manager</td></tr>
</tbody>
</table>
Table 3. Two rounds of interviews
The data analysis includes two steps: within-case analysis and cross-case comparison (Eisenhardt 1989). The emphasis is on the cross-case comparison, in which an analysis tactic suggested by Eisenhardt (1989) is used: the three cases were divided into two groups, XPTeam A and XPTeam B in one group as the cases using an agile approach, and WaterfallTeam in the other group as the case using a waterfall approach. XPTeam A and B are compared first for similarities and differences, and then, as a group, they are contrasted with WaterfallTeam for similarities and differences.
4 MANIFESTATION OF AGILITY IN THE THREE TEAMS
This section presents how agility has been manifested (or shown to be absent) in the three cases.
4.1 Autonomous but sharing team
Team autonomy in XPTeam A and B is firstly shown as competences relevant to software development being distributed among team members. The members of the two teams are involved in all development activities of their projects, and all have to deal with the customers, analyse user requirements and write code together. There are no traditional roles such as system analyst, designer or programmer. Each team member is able to assume all the roles, since comprehensive competences are required to work with user stories, the implementation of which is self-contained and encapsulates different development activities: "The problem is not to have three persons for analysis, or two persons for design, but a user story inside has to resolve analysis, developing, and, etc., everything." (Project manager/XPTeam A) For example, when XPTeam B started the project, there were big gaps among team members in terms of Java related knowledge and skills. As the project went on, the developers with less Java experience learnt quickly from those more experienced, and the team members reached roughly the same level of competence. As a result, there is no dependency on a particular individual, since each team member gets exposure to different areas of a project. Distributed competence is shown in the case of WaterfallTeam too, although the team uses a waterfall approach. Like the other two teams, there are no specific roles like analyst, designer or coder in the team. The developers are not specialized on specific tasks.
Everybody has chances to do different things. Team autonomy is also manifested as a disciplined team in XPTeam A and B, which is seemingly contradictory to the idea of autonomy. However, both teams reckon the importance of disciplines. As a member of XPTeam B describes, disciplines are necessary components of an agile process, and they come from the process the team uses: “There is a set of rules really, and you may not adopt them, you probably adopt most of them, and those rules kind of direct you really, it’s like you need to formalize it so you can be more flexible.” (Test manager/XPTeam B) Team autonomy does not mean the team members are working on their own; instead, there is constant sharing among them. XPTeam A considers sharing an important aspect of team working. They believe that, as a team, they have to face every moment in any case without barriers. Sharing is also seen as a contributor to a team’s agility by WaterfallTeam who works with the waterfall approach. The difference is that sharing in the two teams using agile processes goes beyond simply knowledge sharing. It extends to context sharing and the sharing of achieved results. What is shared among the team members is not only the technical knowledge related to different areas of a project, which helps to distribute competences among them. It is also the knowledge about who knows what, which is particularly important for a bigger team like XPTeam B, and helps the team members self-organize to implement tasks: “I think the ten o’clock stand-up meeting is definitely good, because you know what everybody else on the project is working on, and you might say ‘I’m working on this and I’m not sure how to’… and someone says ‘oh yeah actually I did it yesterday’. “ (Developer B/XPTeam B) In XPTeam A and B, the developers are attentive to what happens around them, with the help of the open space the teams are working in: “When you are doing something, you have to listen what the pair, or the single one if you are in pair, what he’s doing, what they are saying, you have one ear in this way and the other (in the other way).” (Developer A/XPTeam A) Collective ownership of results is another kind of sharing. The two teams using the agile processes both endorse the collective ownership of code, as suggested by XP. A developer of XPTeam B, however, warns that collective ownership can become collective irresponsibility sometimes, which means no one claims to be responsible if there is some problem with a piece of code. In the case of XPTeam A, in addition to collective ownership of code, the team also owns collectively other forms of working results, such as designs, solutions, etc., which helps the team to have a sense of common achievement. 4.2 Stability with embraced uncertainty Stability for software development is a desired property by all three teams, which is seen as an indispensable component in responding to change: “There has to be some limitation of what you are doing, you cannot be so flexible that things are chopping and changing every single day.” (Test manager/XPTeam B) Stability first of all is demonstrated as a short-term certainty in all the three teams. The short-term certainty means a team has a very clear idea of what they have to do in a short time frame, such as one-week or two-week iterations in the cases of XPTeam A and B. 
WaterfallTeam also realizes the importance of the short-term certainty to deal with constant changes from the management: “Well I guess in terms of uncertainty, you don’t know really tomorrow you are going to work on the same project, so on a phased approach you can complete one phase and then this is done. And say after tomorrow, let’s say the next phase is cancelled because of the management decision, then you still have a product that works.” (Developer A/WaterfallTeam) In the cases of XPTeam A and B, stability is also shown as a sense of frequent achievement and satisfaction. The two XP teams realize that, with their agile process, the team members can be motivated more easily than with the waterfall method, since the developers can see the result of their work at the end of each iteration, rather than working for six months without anyone has ever seen or used the code produced as what can happen in traditional processes. There is evidence to suggest that the developers of WaterfallTeam also recognize the importance of motivating people, and believe that a satisfied and motivated team is a source preventing a project from falling apart: “If someone is not happy with what he’s doing, he’s not going to do his job well. If he doesn’t like it, he doesn’t like to co-operate, if he’s not happy with people, he wasn’t going too far... So the main thing is with people, keep them happy... because if people are unhappy, the project falls apart.” (Developer A/WaterfallTeam) In addition, a team focused on working is also a sign of stability. A focused team has several meanings in the two XP teams: one meaning is to focus on work in a short but appropriate amount of time. It can be an iteration, as in XPTeam A and B. Another meaning of being focused is to focus on current work, not wasting time to do future-proof work, which has been emphasized particularly in XPTeam B. The third meaning is to focus on development activities and not to mix them with personal desires of learning new things. For example, XPTeam A is very attentive of keeping the team focused on development activities by reserving daily studying time to satisfy the developers’ desires to learn. Last but not least, stability shows as team working at a sustainable pace, with ease and without anxiety, is another aspect of the stability for development. A developer of XPTeam A associates this working state with agility directly: “I think agility is a state of mind… you don’t have to feel anxiety, you have to be relaxed when you approach a problem, and XP or Scrum is just a method to obtain this kind of relaxity… If you are happy on what you are doing, if you are not stressed, I think you can say you are agile.” (Developer B/XPTeam A) Stability co-exists with uncertainty which is unavoidable in the teams using agile methods. Uncertainty needs to be embraced. Embraced uncertainty is manifested firstly as the probability to change directions in the cases. All the teams believe that the iterative nature of their processes gives them more possibility to change directions when needed, including WaterfallTeam, since they use iterative phases within the waterfall process. But the probability to change should be complemented by having a whole picture of the project, which has been emphasized in the two XP teams. XPTeam B observes that having a whole picture of the project occurs not only to the developers, but also to their onsite customer. 4.3 Team learning XPTeam A understands that learning means doing things differently. 
If a team wants to be adaptive and evolve, they have to learn. In the two XP teams, learning happens as team learning rather than individual learning, which means a team as a whole acquires new knowledge and competences, and the results of learning are shared among team members. Compared with WaterfallTeam, team learning happens continuously and mutually, through using agile practices in the two XP teams. It happens in daily development activities. It is a continuous experience for the team members. Meantime, since learning happens through interactions among the developers, it is generally bi-directional. A developer of XPTeam B comments: “I think it (XP) is a very good way of learning as well, because with pair programming which is part of it, you are learning from somebody different every day, and likewise you’re able to teach somebody else for you’ve been doing the day before … it gives a sense of shared, the project is shared… There’s more, definitely more knowledge been shared.” (Developer C/XPTeam B) Besides, learning is not a daunting experience due to the fact that the teams using the agile processes generally work on small pieces of tasks. The developers learn gradually through implementing them, sometimes with the help of others. The team lead of XPTeam B observes that: “Because it is down to granular level, it’s easier to put better workload over people and also easier for people to get involved, it’s also easier for people who don’t have skill learn gradually on the smaller story rather than having to develop something big on their own, so I think it’s easier to get a higher level skill without being overly complicated… They are not huge chunk of piece to take on.” (Team lead/XPTeam B) Due to these attributes, team learning is seen more efficient than individual learning: “The learning, when we do pair programming it’s more efficient. In one year I learn a lot of things that I didn’t think (I could do) when I was in the university.” (Developer B/XPTeam A) Table 4 summaries the findings. 5 DISCUSSION As shown in this study, agility in the context of software development is highly multifaceted and ambiguous. In this section the different facets of agility demonstrated in the cases are discussed by drawing on relevant agile literature. 5.1 An autonomous but sharing team Despite the suggestion by advocates of agile that software development processes should be organized to improve and distribute both technical and social competences continuously (Cockburn & Highsmith 2001), few empirical studies in agile research have supported this stance. Only Auvinen et al. (2006) highlight an increased competency in a team where several agile practices are piloted. Similarly, no empirical research in the reviewed literature focused on discipline in agile processes despite the emphasis many agilists place on its importance (e.g. Beck & Boehm 2003). 
<table> <thead> <tr> <th>Agility through CAS</th> <th>Manifested in software development</th> </tr> </thead> <tbody> <tr> <td>Autonomous but sharing team</td> <td>Distributed competences</td> </tr> <tr> <td></td> <td>Disciplined team</td> </tr> <tr> <td></td> <td>Knowledge sharing</td> </tr> <tr> <td></td> <td>Context sharing</td> </tr> <tr> <td></td> <td>Collective ownership of results</td> </tr> <tr> <td>Stability with embraced uncertainty</td> <td>Short-term certainty</td> </tr> <tr> <td></td> <td>Team being satisfied, motivated and focused</td> </tr> <tr> <td></td> <td>Working at a sustainable pace</td> </tr> <tr> <td></td> <td>Probability to change directions</td> </tr> <tr> <td></td> <td>Having a whole picture of the project</td> </tr> <tr> <td>Team learning</td> <td>Learning continuously</td> </tr> <tr> <td></td> <td>Mutual learning</td> </tr> <tr> <td></td> <td>Learning gradually</td> </tr> </tbody> </table> Table 4. Manifestation of agility in software development This study suggests that a team composed of autonomous but interacting developers has a tendency to be agile. Each of them is able to solve various development issues and to interact with customers. Competences are not concentrated on few people so that there is no bottleneck in the development process. Team members are confident and courageous in the interactions with customers and with each other. They are also mature and willing to try new things. An autonomous team, however, does not mean team members can be completely amethodical and ill-disciplined. On the opposite, it is composed of disciplined, self-responsible and committed individuals. Discipline is an essential component of an autonomous team, and is drawn from the interactions among peer team members. Sharing is a common theme investigated in several agile studies, though most are focused on knowledge sharing (Fredrick 2003, Melnik & Maurer 2004, Poole & Huisman 2001, Schatz & Abdelshafi 2005). Context sharing has also been observed, but is somewhat understated in agile literature. Melnik and Maurer (2004) believe that the so-called “background knowledge” about a project is important to achieve effective communication. It is important for all team members to have a common frame of reference - a common basis of understanding. Poole and Huisman (2001) observe that, in the organisation they studied, there was a measurable increase in the visibility of what everyone was doing on the team subsequent to the adoption of the agile practices. In fact, this improvement in visibility is considered one of the greatest successes the company has achieved. In terms of results sharing, Fredrick (2003) reports the experience of collective ownership of codes. When it is realized, even the most complex business problems can be easily figured out. In contrast, it was found that individual ownership of code made people defensive - people took it personally when someone suggested their code did not work. Schatz and Abdelshafi (2005) also document the collective ownership in their experience report where developers took ownership of the features they created and took pride in showing their work to the stakeholders during sprint reviews. Rising and Janoff (2000) notice that in a team they have studied, at every meeting, as small tasks were completed and the team could see progress toward the goal, everyone was more satisfied with their work and project progress. The findings of this study confirm that sharing in an agile team not only means knowledge sharing. 
Context sharing is equally important. To effectively self-manage, a team needs to share the understanding of their working context. Context sharing is a precondition to provide effective feedback, interpret them in a sensible way, and take appropriate actions. Sharing also means results sharing, such as collective ownership of code and solutions, which reduces the risk of knowledge loss and increases the sense of being a true team. Another type of sharing, namely problems sharing, is reported by Rising and Janoff (2000) but does not emerge in this study. In the team they have studied, when one team member raises an obstacle in the Scrum meeting, the entire team’s resources come together to bear on that problem, and the entire team immediately owns any one individual’s problems. 5.2 Stability with embraced uncertainty Several agile studies have noticed team satisfaction and motivation in agile processes (e.g. Rising & Janoff 2000, Poole & Huismann 2001, Drobka et al. 2004). For example, Drobka et al. (2004) conduct a survey of a team using XP and find that it creates a surge in morale since XP provides constant feedback to the developers and at the end of each day the team has a working product. Team members gain a sense of accomplishment from their daily work, because they immediately see the positive impact their efforts have on the project. When morale is high, people are excited about their work, leading to a more effective, efficient development team. Short-term certainty has also been noticed in agile studies, though not so extensively. Murru et al. (2003) claim that XP enhances programmers’ sense of project control. They find that programmers with the experience of Rational Unified Process (RUP) felt that XP’s planning game gave them a stronger feeling of control than traditional planning did. They knew where their project was going and whether it was delayed. Furthermore, programmers were more aware of keeping the project’s strategic goals in focus. This knowledge improved the programmers’ motivation. The role played by uncertainty is acknowledged by agile advocates (Highsmith & Cockburn 2001, Williams and Cockburn 2003). Williams and Cockburn (2003) believe that uncertainty is inevitable in all software development. Many changes occur during the time that the team is developing the product. It is highly unlikely that any set of predefined steps will lead to a desirable, predictable outcome. It is necessitated short “inspect-and-adapt” cycles and frequent, short feedback loops. Agile software development is about change and feedback. Highsmith and Cockburn (2001) claim that agile organizations and managers understand that to demand certainty in the face of uncertainty is dysfunctional, and agile practices encourage change rather than discourage it. In turbulent business situations, the change tolerance of a development process must be geared to the change rate of a specific environment, not some internal view of how much change is acceptable. Despite these claims of agile proponents, however, few empirical studies of agile processes have focused on uncertainty and how it is embraced, with the exception of Elssamady and Schalliol (2002) who suggest that, when using the XP practices, especially the simple design, one should look ahead and do things incrementally, in order to have a big picture. This study emphasizes stability as a desired property of development teams that have to deal with continuous changes due to close relationships with customers and evolving requirements. 
A team needs stability, needs to find a proper heartbeat for their development process so that it would not be dissolved into turbulence. Stability gives developers a sense of security and control over what they are working on. It can be drawn from a short-term certainty provided by a time-boxed development process. Stability for development also means a team is working at a sustainable pace, focused and motivated, working with ease and satisfaction. Certainty and security is only for a short term, however. Uncertainty is inevitable in software development. It comes from both the environment a team is embedded in and the development process itself. Managing uncertainty does not mean to predict what is going to happen and do future proof work today. It is to ensure the probability to change the direction a team goes towards but meantime not to get short-sighted. Team members need to have a whole picture of the project in mind. 5.3 Team learning Learning is a common theme explored in much agile research (e.g. Dingsoyr & Hanssen 2002, Drobka et al. 2004, Hunt & Thomas 2003, Meso & Jain 2006), but the focus is mainly on individual rather than team learning. In a survey conducted by Drobka et al. (2004), it was found that XP can reduce the learning curve for new team members. Fifty-five percent of the developers believed that using XP shortened their initial project-learning curve. Hunt and Thomas (2003) emphasize that learning in an agile process is a continuous process, and it means learning about more than just the technology involved. It covers how the team works together (or how it doesn’t) and team members themselves, which leads to behavior and mental model change. Learning means doing things differently. One important consequence of learning for an individual and a team is to change either their behavior or mental model. It is a prerequisite for organizational evolution and co-evolution (Mitleton-Kelly 2003). This study suggests that team learning is different than individual learning, though closely related and dependent on it. From the CAS perspective, team learning is emergent from the interactions of team members. Team learning is a collective result, which means a team as a whole acquires new knowledge and competences, and results of individual learning are shared among team members. In an agile team, team learning happens constantly, mutually and gradually. 6 CONCLUSION This study investigates how agility is manifested in agile software development through studying the software development processes of three teams, two using XP, one using waterfall approach. Taking the key concepts of CAS as theoretical lenses, the study explores the true meaning of being agile from several different angles, including autonomous team, stability and uncertainty, and team learning. Compared with the existing agile literature, the findings emphasize that stability and discipline are as desirable as flexibility, and context sharing is of the same value and importance as knowledge sharing in agile processes. In addition, the collective nature of learning is underlined. The main theoretical contribution of the study is the understanding of agility in software development which is both theory-informed and empirically grounded. Drawn on CAS, the study lifts the understanding beyond the advocational literature found in agile field (Baskerville & Pries-Heje 2004). The discovered agile properties enrich the understanding of agility in software development. 
The practical implication of the study is that the findings indicate the desired effects of using agile methods. The agile properties provide a software development team with observable indicators of agility across different facets of software development. One limitation of the study comes from using CAS as its theoretical basis. CAS originated in the natural sciences, and there is a deeper concern about whether it is appropriate for the study of human organizations. A combination of CAS theory with appropriate social theories might be a promising avenue for future research. Some limitations are associated with the case study research approach. One concern is the uniqueness of the corporate, team and project characteristics of each case, which makes valid comparison and theoretical generalization of the case study results difficult (Kitchenham et al. 2002). Specific to this study, one affecting factor is the diversity of the team profiles (as shown in Table 2). To increase the validity of the study, the contextual information of the cases has been taken into account in the data analysis. Another limitation is that only one agile method, XP, has been involved in our case studies. Future work will verify whether the findings apply to teams using other agile methods, such as Scrum, Lean system development, etc. References
{"Source-Url": "https://ulir.ul.ie/bitstream/handle/10344/2105/2009_Wang,X.pdf;sequence=2", "len_cl100k_base": 8139, "olmocr-version": "0.1.51", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 30396, "total-output-tokens": 9915, "length": "2e12", "weborganizer": {"__label__adult": 0.0003740787506103515, "__label__art_design": 0.0002853870391845703, "__label__crime_law": 0.0003173351287841797, "__label__education_jobs": 0.002101898193359375, "__label__entertainment": 4.780292510986328e-05, "__label__fashion_beauty": 0.0001672506332397461, "__label__finance_business": 0.00048828125, "__label__food_dining": 0.0003669261932373047, "__label__games": 0.0004432201385498047, "__label__hardware": 0.0005054473876953125, "__label__health": 0.0004963874816894531, "__label__history": 0.00021004676818847656, "__label__home_hobbies": 8.165836334228516e-05, "__label__industrial": 0.0003528594970703125, "__label__literature": 0.0002808570861816406, "__label__politics": 0.0002853870391845703, "__label__religion": 0.0004048347473144531, "__label__science_tech": 0.005489349365234375, "__label__social_life": 0.0001087188720703125, "__label__software": 0.0032806396484375, "__label__software_dev": 0.98291015625, "__label__sports_fitness": 0.0003407001495361328, "__label__transportation": 0.0005297660827636719, "__label__travel": 0.00020766258239746096}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 43052, 0.02634]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 43052, 0.32592]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 43052, 0.93936]], "google_gemma-3-12b-it_contains_pii": [[0, 506, false], [506, 1844, null], [1844, 6293, null], [6293, 10210, null], [10210, 14053, null], [14053, 17921, null], [17921, 21938, null], [21938, 25579, null], [25579, 29701, null], [29701, 33905, null], [33905, 37910, null], [37910, 41678, null], [41678, 43052, null]], "google_gemma-3-12b-it_is_public_document": [[0, 506, true], [506, 1844, null], [1844, 6293, null], [6293, 10210, null], [10210, 14053, null], [14053, 17921, null], [17921, 21938, null], [21938, 25579, null], [25579, 29701, null], [29701, 33905, null], [33905, 37910, null], [37910, 41678, null], [41678, 43052, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 43052, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 43052, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 43052, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 43052, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 43052, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 43052, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 43052, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 43052, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 43052, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 43052, null]], "pdf_page_numbers": [[0, 506, 1], [506, 1844, 2], [1844, 6293, 3], [6293, 10210, 4], [10210, 14053, 5], [14053, 17921, 6], [17921, 21938, 7], [21938, 25579, 8], [25579, 29701, 9], [29701, 33905, 10], [33905, 37910, 11], [37910, 41678, 12], [41678, 43052, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 43052, 0.26404]]}
olmocr_science_pdfs
2024-12-04
2024-12-04
78a289c30db0d27fca12061e30b823e0900ceb9e
Towards QoS-Oriented SLA Guarantees for Online Cloud Services Damián Serrano, Sara Bouchenak University of Grenoble – LIG Grenoble, France {Firstname.Lastname}@imag.fr Yousri Kouki, Thomas Ledoux EMN – INRIA – LINA Nantes, France {Firstname.Lastname}@inria.fr Jonathan Lejeune, Julien Sopena, Luciana Arantes, Pierre Sens LIP6 – INRIA, Paris, France {Firstname.Lastname}@lip6.fr Abstract—Cloud Computing provides a convenient means of remote on-demand and pay-per-use access to computing resources. However, its ad hoc management of quality-of-service and SLA poses significant challenges to the performance, dependability and costs of online cloud services. The paper precisely addresses this issue and makes a threefold contribution. First, it introduces a new cloud model, the SLAaaS (SLA aware Service) model. SLAaaS enables a systematic integration of QoS levels and SLA into the cloud. It is orthogonal to other cloud models such as SaaS or PaaS, and may apply to any of them. Second, the paper introduces CSLA, a novel language to describe QoS-oriented SLA associated with cloud services. Third, the paper presents a control-theoretic approach to provide performance, dependability and cost guarantees for online cloud services, with time-varying workloads. The proposed approach is validated through case studies and extensive experiments with online services hosted in clouds such as Amazon EC2. The case studies illustrate SLA guarantees for various services such as a MapReduce service, a cluster-based multi-tier e-commerce service, and a low-level locking service. Keywords—SLA; QoS; Cloud Computing; Specific Language; Online Control; I. INTRODUCTION Cloud Computing is a paradigm for enabling remote, on-demand access to a set of configurable computing resources [1]. This model aims to provide hardware and software services to customers, while minimizing human efforts in terms of service installation, configuration and maintenance, for both cloud provider and cloud customer. A cloud may have the form of an Infrastructure-as-a-Service (IaaS), a Platform-as-a-Service (PaaS) or a Software-as-a-Service (SaaS). However, cloud’s ad-hoc management in terms of quality-of-service (QoS) and Service Level Agreement (SLA) poses significant challenges to the performance, availability, energy consumption and economical costs of the cloud. Existing public clouds provide very few guarantees in terms of performance and dependability [2]. This is the case for Amazon EC2 compute service and Amazon S3 storage service [3], Rackspace Cloud Servers compute service and Rackspace Cloud Files storage service [4], Azure Compute and Azure Storage [5].
We believe that a differentiating element between Cloud Computing environments will be the QoS and the SLA provided by the cloud. This raises the following questions: (i) How to consider SLA in a general way for different cloud environments? (ii) How to describe the SLA terms between cloud provider and cloud customer, such as service levels, penalties in case of SLA violation, etc. (iii) How to provide guarantees on cloud QoS and provide better than best-effort behavior for clouds? The contributions of this paper are as follows: - A novel cloud model is proposed: SLAaaS (SLA-aware Service). The SLAaaS model enriches the general paradigm of Cloud Computing, and enables systematic and transparent integration of service levels and SLA into the cloud. SLAaaS is orthogonal to IaaS, PaaS and SaaS clouds and may apply to any of them. - A specific language is introduced to describe QoS-oriented SLA associated with cloud services, the CSLA (Cloud Service Level Agreement) language. - A control-theoretic approach is described to provide performance, dependability and cost guarantees for online cloud services, with time-varying workloads. - Three case studies running on private clusters and Amazon EC2 public cloud illustrate the soundness of the proposed approach. These include the first SLA-oriented dynamically provisioned MapReduce service, a multi-tier e-commerce service, and a SLA-oriented locking service. The rest of the paper is organized as follows. Section II introduces the proposed SLAaaS cloud model, CSLA language and online cloud control. Section III presents the experimental case studies. Section IV reviews the related work, and Section V draws our conclusions. II. SLAaaS CLOUD MODEL A. Background In this section, we first provide preliminary definitions before introducing the SLAaaS cloud model. A cloud provides a set of services. A cloud service exposes a functional interface with operations to call on the cloud. For instance, an IaaS cloud as Amazon EC2 exposes a functional interface that allows users to acquire compute instances, to run software on these instances or to release instances. Amazon S3 IaaS cloud service exposes a functional interface that allows users to store, read or delete any amount of data. Amazon RDS PaaS cloud provides a relational database... service that makes it easy to set up, operate, and scale a relational database. Google Apps SaaS cloud provides a set of services with functional interfaces, such as Google Drive that allows users to create, update and share documents. Besides the functional aspects of a cloud service, there are also non-functional aspects related to the quality-of-service. There are different QoS aspects, such as performance, availability, reliability, cost, etc. For each QoS aspect, multiple QoS metrics may be considered. Examples of performance metrics are service response time that is the necessary time for a user request to get served, service throughput that reflects cloud service scalability, etc. Examples of availability metrics are service abandon rate that is the ratio of accepted service requests to the total number of requests, or service use rate that is the ratio of time a cloud service is used to the total time. Examples of reliability metrics are mean time between failures which is the predicted elapsed time between inherent failures of the service, or mean time to recover which is the average time that a service takes to recover from a failure. 
Finally, examples of cost metrics are the energetic cost that reflects the energy footprint of a service, or the financial cost of using a cloud service. Thus, a QoS metric is a means to quantify the service level with regard to a QoS aspect. One might want a service level to attain a given objective, that is, a Service Level Objective (SLO). A SLO usually has one of the following forms: provide a QoS metric with a value higher/lower than a given threshold, maximize/minimize the QoS metric, etc. Therefore, a Service Level Agreement (SLA) is a set of SLOs to meet and is negotiated between two parties, the cloud service provider and its customer. B. SLAaaS Model We introduce SLA-aware-Service (SLAaaS), a new cloud model that defines a non-functional interface which exposes the SLA associated with a cloud functional service. Figure 1 illustrates the SLAaaS model at three cloud levels: an Infrastructure-as-a-Service cloud, a Platform-as-a-Service cloud and an example of a Software-as-a-Service cloud that represents here a business intelligence system. The example of this figure shows four levels: an end-user is a client of the SaaS cloud, which is itself a client of the PaaS cloud, which is itself a client of the IaaS cloud. Roughly speaking, the functional interface of a cloud exposes operations that allow a cloud customer to get new resources from the cloud, to access/use resources in the cloud or to release resources that he/she does not use anymore. With SLAaaS, the cloud also exposes the SLA non-functional interface. Furthermore, SLAaaS aims to provide SLA-oriented cloud reconfiguration, and SLA governance. Due to space limitation, we focus on the former in the rest of the paper. SLAaaS first allows a user to select the QoS aspects he/she is interested in (e.g. performance, cost), and the QoS metrics for these aspects (e.g. service response time, financial cost). The user can then choose the SLOs he/she wants to apply on the QoS metrics. For instance, the SLO for the service response time might be to guarantee that the response time never exceeds a given threshold, and the SLO for the financial cost might be to guarantee that the cost is minimized. Then, the SLA is defined as the combination of SLOs. Furthermore, the SLA between a cloud service and its customer may include additional information, such as the agreed confidence level (e.g. SLOs are guaranteed with a confidence of 95%), or the penalties applied in case of SLA violation. Figure 2 presents three examples of SLAs that apply at three different cloud levels, between the end-user and the SaaS, between the SaaS and the PaaS, or between the PaaS and the IaaS. [Figure 2. Three example SLAs, each specified in terms of its SLOs, confidence level, and penalty.] In SLAaaS, the cloud SLA is defined with the CSLA language introduced in Section II-C, and the SLA is guaranteed following a control-theoretic approach, as described in Section II-D.
C. CSLA Specific Language CSLA, the Cloud Service Level Agreement language, allows a SLA between a cloud service provider and its customer to be described by defining QoS guarantees in the form of SLO clauses [6]. The clauses are combined using "and" and "or" operators. Previous efforts to define SLA for web services (WSLA) [7] and service oriented architectures (SLA@SOI) [8] have influenced the design of this language. Among its novelties, CSLA integrates features dealing with QoS uncertainty and cloud fluctuations: fuzziness, confidence, and penalty. Fuzziness defines the acceptable margin around the target value of a SLO. Confidence is the compliance percentage of SLO clauses. Lastly, penalties are applied in case of SLA violations to compensate cloud service customers, i.e. penalties reduce the service price. The reduction can be applied either at a constant or a variable rate. In the latter case, the request price is modeled as: \[ P = \alpha - \beta \cdot dt \] where \(\alpha\) is the price with no violations (\(\alpha > 0\)), \(\beta\) is the penalty rate (\(\beta > 0\)) and \(dt\) is the absolute difference between the actual value and the SLO threshold. For example, if a SLO indicates a maximum response time of 3 s per request, with a request price \(\alpha = \$0.8\) and a penalty rate \(\beta = 0.5\), a request with a response time of 4 s costs \(P = 0.8 - 0.5 \cdot |4 - 3| = \$0.3\). The CSLA syntax is defined according to the grammar generated from the meta-model in [6]. Based on the CSLA meta-model, the SLA can be defined in any language for any cloud service. In this paper, we use XML as a representation format. Figure 3 presents an example of a CSLA file describing the SLA between a SaaS provider and its customer. Two SLOs are composed using the "and" operator: one is a performance SLO and the other a dependability SLO. The performance SLO specifies that the request response time must be below 10 seconds, with an acceptable margin of less than 1 second. The dependability SLO specifies that the service abandon rate should not exceed 3% of incoming requests, with an acceptable margin of 0.2%. SLOs are guaranteed for at least 95% of requests to the cloud service (confidence). Thus, if more than 5% of the cloud service requests violate the SLOs, a reduction of $0.1 is applied to the price of each request violating the SLA. For a given cloud service, a SLA template is generated with pre-defined parameters to ensure that the offered QoS guarantees are realistic and realizable. Finally, once a SLA is described with CSLA and established between a cloud service provider and a cloud customer, it is passed to an online cloud controller as described in the next subsection. D. Online Cloud Control The online control of cloud services is based on a general feedback control loop as depicted in Figure 4. To manage cloud SLA in a principled way, we follow a control-theoretic approach to design fully autonomic SLA-oriented cloud services. The general approach consists of three main steps. First, a utility function is defined to precisely describe the set of SLOs as specified in the cloud SLA, the weights assigned to these SLOs if any, and the possible trade-offs and priorities between the SLOs. The cloud service configuration (i.e. how many resources, and in what combination) with the highest utility is the best regarding SLA guarantees. Then, control theory techniques are applied to model cloud service behavior, and to propose control laws and algorithms for fully autonomic SLA-oriented cloud services.
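To make the preceding notions concrete, the following is a minimal, illustrative Python sketch of how an SLA built from SLO clauses, a confidence level, and the penalty model P = α − β·dt might be represented and evaluated per request. It is neither the CSLA schema nor the authors' implementation; all class and field names are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SLO:
    """One service-level objective, e.g. 'response_time must stay below 3 s'."""
    metric: str        # QoS metric name, e.g. "response_time"
    threshold: float   # target value the measured metric must not exceed
    fuzziness: float   # acceptable margin around the threshold

    def is_met(self, measured: float) -> bool:
        # The clause tolerates values up to threshold + fuzziness.
        return measured <= self.threshold + self.fuzziness

@dataclass
class SLA:
    """An SLA as a conjunction of SLO clauses plus confidence and penalty."""
    slos: List[SLO]
    confidence: float    # e.g. 0.95: SLOs must hold for 95% of requests
    base_price: float    # alpha: price of a request with no violation
    penalty_rate: float  # beta: price reduction per unit of deviation

    def request_price(self, metric: str, measured: float) -> float:
        """Price of one request, applying P = alpha - beta * dt on violation."""
        for slo in self.slos:
            if slo.metric == metric and not slo.is_met(measured):
                dt = abs(measured - slo.threshold)
                return max(0.0, self.base_price - self.penalty_rate * dt)
        return self.base_price

# Mirrors the paper's worked example: 3 s response-time SLO, alpha = 0.8,
# beta = 0.5; a request answered in 4 s then costs about 0.3.
sla = SLA(slos=[SLO("response_time", threshold=3.0, fuzziness=0.0)],
          confidence=0.95, base_price=0.8, penalty_rate=0.5)
print(sla.request_price("response_time", 4.0))
```

A full implementation would also need the "or" composition of clauses and bookkeeping to check the confidence level over a window of requests; this sketch only covers the "and" case for a single metric.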
The challenges for modeling cloud services are to build accurate models that are able to capture the non-linear behavior of cloud services, and that are able to self-calibrate to render the variations of service workloads. The challenges for controlling cloud services are to propose accurate and efficient algorithms and control laws that calculate the best service configuration, and rapidly react to changes in cloud services. service usage. The next section illustrates this approach to control online cloud services to guarantee their SLA. III. CASE STUDIES We illustrate in this section how to build SLAaaS cloud services with three use cases: a MapReduce service, a multi-tier service, and a distributed locking service. A. Experimental Environment The experiments presented in this section were conducted in a cluster running on Amazon EC2 [3], and in two clusters running in Grid’5000 [10], see the hardware configuration in Table I. The underlying software configuration is as follows. Amazon EC2 instances run Fedora Linux 8 with kernel v2.6.21. Nodes in Grid’5000 (i.e. G5K I and G5K II) run Debian Linux 6 with kernel v2.6.32. Experiments of Section III-B use Apache Hadoop v1.0 MapReduce framework, Java 6, and the high-level MRBS benchmark suite [11]. Experiments of Section III-C use Apache Tomcat v7 web server, MySQL v.5.5.1 database server, and the TPC-W benchmark [12]. Finally, experiments of Section III-D are based on C++ and OpenMPI, and use micro-benchmarks. Table I <table> <thead> <tr> <th>Cluster</th> <th>CPU</th> <th>Memory</th> <th>Storage</th> <th>Network</th> </tr> </thead> <tbody> <tr> <td>Amazon EC2</td> <td>large instances, 4 EC2 Compute Units in 2 virtual cores</td> <td>7.5 GB</td> <td>850 MB</td> <td>10 Gbit Ethernet</td> </tr> <tr> <td>G5K I</td> <td>4-core 2.5 GHz Intel Xeon E5420 QC</td> <td>8 GB</td> <td>136 GB</td> <td>1 Gbit Ethernet</td> </tr> <tr> <td>G5K II</td> <td>4-core 2.53 GHz Intel Xeon X3440</td> <td>16 GB</td> <td>278 GB</td> <td>Infiniband 20G</td> </tr> </tbody> </table> B. SLAaaS-Oriented MapReduce PaaS MapReduce is a programming model and a software framework to support distributed computing and large data processing on clusters of commodity machines [13]. High performance and fault-tolerance are two key features of MapReduce. They are achieved by automatic task scheduling in MapReduce clusters, automatic data placement, partitioning and replication, and automatic failure detection and task re-execution. A MapReduce job, i.e. an instance of a running MapReduce program, is automatically divided into multiple tasks scheduled by the MapReduce framework to run in parallel on cluster nodes. MapReduce is usually provided as a Platform-as-a-Service by cloud providers, such as Amazon and Azure. The functional interface of such a service includes operations such as starting a MapReduce cluster of a given size (i.e. #nodes), running a job on a MapReduce cluster, or stopping a MapReduce cluster. We consider the case of a MapReduce PaaS that follows the SLAaaS model to illustrate the proposed approach. Thus, a SLA is contracted between the MapReduce PaaS and its customer. Figure 5 provides an example of the SLA. It specifies that the MapReduce job response time should not exceed 90 seconds, while the MapReduce cluster size (i.e. #nodes) should be kept as small as possible. In order to guarantee the SLA, we applied a control-theoretic approach to provide a SLA-oriented self-elastic MapReduce cluster. 
Although some initiatives exist to add elasticity to MapReduce [14], [15], as far as we know, this is the first attempt to provide fully self-elastic MapReduce that is able to automatically adapt cluster size to workload variations in order to guarantee the SLA. To this purpose, the SLA is translated into a utility function in an ad hoc manner. First, the following boolean expression is defined to reflect whether the service performance SLO is met at a given time $t$: $$PO(t) = \ell(t) \leq \ell_{max}$$ (1) where $\ell(t)$ is the average MapReduce job latency (i.e. response time) at time $t$, and $\ell_{max}$ is the maximum job latency not to exceed. Note that $\forall t, PO(t) \in \{0, 1\}$, depending on whether Eq. (1) holds or not. Then, the utility function combines both performance and cost (cluster size) objectives: $$\theta(t) = \frac{PO(t)}{\omega(t)}$$ (2) where $\omega(t)$ is the MapReduce cluster size at time $t$. Here, $\forall t, \theta(t) \in [0, 1]$. Intuitively, the MapReduce cluster with the highest utility is the one that guarantees the performance SLO (if possible) with minimal cluster size. Then, the MapReduce cluster is modeled following a queuing network approach, where each queue represents a cluster node and is modeled as an M/M/c queue. Here, client communication with the MapReduce service is modeled as a closed loop to reflect the synchronous communication model that underlies this service, that is a client waits for a request response before issuing another request. Moreover, multiple clients may concurrently request the service (i.e. execute MapReduce jobs). The model predicts the average request latency based on the monitored workload and service cluster size. The workload is defined as the number of concurrent clients and the average response time for the requests. Then, a capacity planning is applied to calculate the MapReduce cluster size with the highest utility, and to apply it to the online MapReduce service. The used model and capacity planning are adaptations from our previous work on Internet services to MapReduce services [16]. The model allows the capacity planning to find the exact number of nodes needed to guarantee the SLA, instead of adding/removing nodes one by one. Moreover, monitoring windows to measure the workload and to react to changes in the workload are ensured to be long enough to let the system stabilize after adding/removing nodes. ![Figure 6. Self-elastic MapReduce service](image) Figure 6 shows the results for the SLAaaS-oriented MapReduce PaaS running on Amazon EC2. MRBS [11] was used to stress the MapReduce service. The setup consists of a set of nodes hosting the MapReduce cluster, an additional node hosting MRBS emulating MapReduce clients, and another node for the cloud SLA controller. Among the different benchmarks provided by MRBS, the movie recommender system benchmark was used in this use case. It builds upon a set of movies, a set of users, and a set of ratings and reviews users give for movies to indicate whether and how much they liked or disliked the movies. A client can request the top-10 recommendations for him/her, all the ratings given to a movie, how much the client would like a movie, or all the ratings given by another client. Each request is randomly chosen, and after receiving the response, the client submits another request. Our experiments are based on the following set of real data: 1700 movies, 1000 users, 100,000 ratings [17]. 
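The capacity-planning step sketched above can be illustrated with a small, hypothetical example: the controller evaluates the utility of Eq. (2) for candidate cluster sizes and keeps the one with the highest utility, i.e. the smallest size that still satisfies the latency SLO of Eq. (1). The Python sketch below is illustrative only; the latency predictor is a crude placeholder rather than the authors' closed M/M/c queuing model, and all names and constants are assumptions.

```python
def predicted_latency(clients: int, nodes: int, base_latency: float = 30.0) -> float:
    """Toy latency model: jobs are assumed to share the cluster evenly.
    A calibrated queuing model would replace this function in practice."""
    concurrency_per_node = max(1.0, clients / nodes)
    return base_latency * concurrency_per_node

def utility(clients: int, nodes: int, l_max: float) -> float:
    """Eq. (2): utility = PO(t) / cluster size, with PO(t) as in Eq. (1)."""
    po = 1 if predicted_latency(clients, nodes) <= l_max else 0
    return po / nodes

def plan_capacity(clients: int, l_max: float, max_nodes: int = 40) -> int:
    """Return the cluster size with the highest predicted utility."""
    best_nodes, best_utility = 1, utility(clients, 1, l_max)
    for n in range(2, max_nodes + 1):
        u = utility(clients, n, l_max)
        if u > best_utility:
            best_nodes, best_utility = n, u
    return best_nodes

# e.g. 10 concurrent clients and the 90 s latency SLO of Figure 5
print(plan_capacity(clients=10, l_max=90.0))
```

In the paper's setting this kind of model-driven search replaces one-by-one node additions: the planner jumps directly to the predicted size, and monitoring windows are kept long enough for the cluster to stabilize between reconfigurations.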
The MapReduce service initially runs on a four-node cluster, and the service is warmed up for 10 minutes with 5 clients; then, measurements are taken for 65 minutes as shown in Figure 6. The service state (#clients and average response time) is monitored every minute and the capacity planning is executed every 3 minutes, taking the average of the service states over the last 5 minutes. We vary the number of concurrent clients over time between 5 and 10 as shown in the figure. When additional clients access the service, client request response times increase until the SLA is violated at time 13 minutes. Nevertheless, the automatic self-elastic MapReduce service adapts and increases its capacity to guarantee the SLA again. Finally, when the workload decreases after minute 40, the automatic self-elastic service releases underused nodes. Thus, this experiment shows that SLAaaS successfully applies to an online MapReduce service to guarantee performance- and cost-oriented SLA. In this case study, we considered the resource cost metric (i.e. #nodes) in the SLA. In future work, we will consider the financial cost, which relies on both the resource cost and the units of time (usually hours) during which the resource is used. C. SLAaaS-Oriented Multi-Tier Bookstore SaaS In this case study, we apply the SLAaaS model to the TPC-W online bookstore Software-as-a-Service [12]. This service follows a multi-tier architecture consisting of a front-end web tier and a back-end database tier. For scalability purposes, each tier may consist of many server instances. Intuitively, the higher the number of instances in each tier, the better the performance and availability of the service. However, the number of instances hosting a cloud service has a direct impact on the service cost, and actually depends on the current service workload. Figure 7 presents an example of SLA established between the multi-tier bookstore SaaS and its customers. This contract combines performance, availability and cost SLOs as follows: request response time should not exceed 500 ms and at least 95% of client requests should be served, with a number of instances hosting the service as small as possible. ![Figure 7. SLA for multi-tier bookstore SaaS in CSLA language](image) First, a utility function is drawn ad hoc from the SLA. The following boolean expression reflects whether the service performance SLO and the service availability SLO are met at a given time $t$: $$PAO(t) = \left( \ell(t) \leq \ell_{\text{max}} \right) \land \left( \alpha(t) \geq \alpha_{\text{min}} \right)$$ (3) where $\ell(t)$ is the average request latency (i.e. response time), $\ell_{\text{max}}$ the maximum request latency not to exceed, $\alpha(t)$ the service availability (i.e. ratio of non-rejected requests), and $\alpha_{\text{min}}$ is the minimum service availability to be guaranteed. Note that $\forall t, PAO(t) \in \{0, 1\}$, depending on whether Eq. (3) holds or not. Then, the utility function combines performance, availability and cost (#nodes) objectives: $$\theta(t) = \frac{T \cdot PAO(t)}{\omega(t)}$$ (4) where $\omega(t)$ is the number of nodes that host the multi-tier SaaS at time $t$, and $T$ is the number of tiers of the multi-tier service ($T = 2$ in TPC-W). $T$ is used in Eq. (4) for normalization purposes. Here, $\forall t, \theta(t) \in [0, 1]$, since $\omega(t) \geq T$ (at least one instance per tier) and $PAO(t) \in \{0, 1\}$.
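As a rough illustration, Eqs. (3) and (4) translate directly into code. The snippet below is a hypothetical sketch, not the authors' controller; the thresholds are taken from the SLA of Figure 7 (500 ms latency, at least 95% of requests served), and availability is read as having to stay at or above its minimum.

```python
L_MAX = 0.5   # maximum request latency in seconds (500 ms)
A_MIN = 0.95  # minimum availability: at least 95% of requests served
T = 2         # number of tiers in TPC-W (web tier + database tier)

def pao(latency: float, availability: float) -> int:
    """Eq. (3): 1 if both the performance and availability SLOs hold, else 0."""
    return 1 if (latency <= L_MAX and availability >= A_MIN) else 0

def utility(latency: float, availability: float, nodes: int) -> float:
    """Eq. (4): normalized utility; nodes >= T since each tier needs one instance."""
    assert nodes >= T
    return T * pao(latency, availability) / nodes

# A 3-node configuration that meets the SLA scores higher than a 5-node one,
# so a planner maximizing utility prefers the cheaper configuration.
print(utility(0.30, 0.99, nodes=3))  # ~0.67
print(utility(0.30, 0.99, nodes=5))  # 0.4
```

A capacity planner would evaluate this utility over candidate configurations using the latency and availability predicted by the queuing model described next.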
The multi-tier service is then modeled following a queuing network approach, where each queue represents a server replica and is modeled as an M/M/c/K queue, and the network of queues represents the series of tiers in a multi-tier service. The model predicts the client request latency and service availability, based on the service workload, the multi-tier service size, and the admission control level (MPL: Multi-Programming Level) usually applied on each tier of a multi-tier service. The workload consists of the number of concurrent clients, the average request response time and the visiting ratio (i.e. #requests in the back-end per request in the front-end). Then, the capacity of the multi-tier service that provides the highest utility (Eq. (4)) is calculated, and applied to the online service. Due to space limitations, the model and capacity planning algorithms are not detailed here; more information can be found in [16]. As for the previous case study, each run of the capacity planning finds the needed service size, and it is ensured that the system stabilizes between two consecutive executions of the capacity planning. The experiments on the online multi-tier bookstore service were run in G5K I (see Table I), with a read-only version of the browsing mix, a workload specified by TPC-W. ![Figure 8. Self-elastic multi-tier bookstore service](image) Figure 8 depicts the results considering the SLA given in Figure 7. The number of concurrent clients was varied from 50 to 500 and then to 50 again. The service state (#clients, average response time and ratio of rejected requests) is monitored every 5 seconds and the capacity planning runs every minute using the average of the service states over the last 2 minutes. Initially, the online service is composed of one instance for the web tier, and one instance for the database tier. There is also another node that runs the SLA controller. The SLA is violated when the number of concurrent clients increases to 500 (see Figure 8(a)), which triggers the reconfiguration of the cloud service, creating two new instances in the database tier and adjusting the MPL as shown in Figures 8(c) and 8(d). Once the reconfiguration of the cloud service has been applied, the service is able to cope with the SLA requirements again (see Figures 8(a) and 8(b)). Finally, when the load decreases, the system is over-provisioned, so some instances are released and the MPL adjusted accordingly. Therefore, the SLAaaS-oriented multi-tier service is able to successfully guarantee its SLA despite workload variations. D. SLAaaS-Oriented Locking PaaS Locking ensures exclusive access to shared resources by concurrent processes, and is usually provided as a Platform-as-a-Service in the cloud. For instance, Google provides the Chubby distributed locking mechanism that is used by other cloud services such as the Google Filesystem service and the Bigtable data storage service [18]. Such a mechanism provides a functional interface with operations to acquire or release locks, among others. However, locking procedures remain costly. Locking was identified as an important and poorly resolved problem [19]; these protocols have to be scalable and take into account QoS objectives. We apply the SLAaaS model to a locking PaaS to illustrate our proposed approach. Thus, a SLA is contracted between the locking service and its customer. Figure 9 gives an example of SLA that combines performance and availability objectives.
The SLA specifies that the response time of a request to the lock service should not exceed 400 ms. It also specifies that the usage of the locked shared resource is kept as high as possible. This is translated ad hoc into a utility function: \[ \theta(t) = \frac{PO(t)}{\rho(t)} \] where \(PO(t)\) is given in Eq. (1), and \(\rho(t)\) is the use rate of locked resource. Intuitively, the locking service with the highest utility is the one that guarantees the performance SLO (if possible) with a high resource use rate, and therefore, the SLA. Then, we combine admission control techniques with a distributed locking algorithm in order to guarantee the SLA [20]. Thus, before accepting a request, the locking service controller first verifies that, taking into account current system state, the performance SLO can be satisfied. If so, the request for lock acquisition is accepted and will be served; otherwise, the request is rejected. Due to space limitations, algorithm details are not provided but can be found in [20]. In the present paper, we show how the locking algorithm is integrated with the SLaaS model. ![Figure 9. SLA for locking PaaS in CSLA language](image) We conducted experiments with our SLAaaS-oriented locking service, running in a 40 node cluster in the G5K II infrastructure (see Table I). To emulate long distance, we injected network latency between nodes. Each node runs a process that may request to acquire the lock on a shared resource. The load varies over time, and is characterized by the ratio of processes requesting lock acquisition to the total number of processes, as shown in Figure 10. Figure 10(a) presents lock request response time over time. When the load is low, the response time remains low compared to the SLO. When the load increases, there is more contention on the shared resource, with an increase of lock request latency. However, the locking service is able to automatically adapt to keep request latency below the threshold as specified by the SLA. This is obtained thanks to admission control. Figure 10(b) illustrates the use rate of the shared resource, i.e. how often the resource is actually locked and used by one of the processes. It shows the time ratio during which the resource is used by processes to the total time. In our network configuration this ratio cannot exceed 50% since half of time is spent in message transmission. Interestingly, when the load increases the locking service adapts to the load, with an increasing use rate until the maximum value, which corresponds to the availability objective of the underlying SLA. In summary, SLAaaS successfully applies to associate SLA with a locking PaaS. ![Figure 10. Self-adaptive locking service](image) **IV. RELATED WORK** Existing public clouds provide very few guarantees in terms of performance and dependability [2]. Amazon EC2 compute service offers a service availability of at least 99.95% [3], and Amazon S3 storage service guarantees a service reliability of 99.9% [3]. However, in case of an outage, Amazon requires the customer to send them a claim within thirty business days for Amazon EC2 and ten days for Amazon S3. Amazon cloud services do not provide performance guarantees or other QoS guarantees. Rackspace and Azure cloud services provide similar behaviors [4], [5]. Several recent research works consider SLA in cloud environments [21], [22], [23], [24]. Chhetri et al. 
propose the automation of SLA establishment based on a classification of cloud resources in different categories with different costs, e.g. on-demand instances, reserved instances and spot instances in Amazon EC2 cloud [21]. However, this approach does not provide guarantees in terms of performance, nor dependability. Macias and Guitart follow a similar approach for SLA enforcement, based on classes of clients with different priorities, e.g., Gold, Silver, and Bronze clients [22]. Here again, a relative best-effort behavior is provided for clients with different priorities, but neither performance nor dependability SLOs are guaranteed. Other works propose heuristics for SLA management [23], or target specific environments such as SaaS [24]. The former work provides best-effort without strict guarantees on SLA, and the latter does not tackle the many types of clouds. Regarding the specification of SLA, some initiatives contributed to this effort, such as WSLA [7], and WS-Agreement [25]. The proposed CSLA language shares motivations with these projects and goes further by taking into account high cloud elasticity and QoS instability. Its general concepts were introduced in [6]; in the present paper we describe its integration with the SLAaaS model and its application to real cloud services. V. CONCLUSION This paper presents SLA-aware-Service (SLAaaS) cloud model, for a systematic and principled way to integrate quality-of-service (QoS) and service level agreement (SLA) into the cloud. The CSLA specific language is proposed to describe SLAs associated with cloud services in a convenient way. A control-theoretic approach is followed to provide performance, dependability and cost guarantees for online services. Our experiments on online cloud services through various case studies successfully demonstrate the usefulness of SLAaaS. While this paper illustrates SLA with QoS metrics such as client request response time, availability, resource usage and resource cost, we believe that the proposed model and control approach may apply to other metrics, such as service throughput, and energetic cost. This work opens interesting perspectives in terms of cooperative clouds and cooperative SLAs. We hope that such a model will lead to more principled, less ad-hoc solutions of cloud QoS and SLA management. ACKNOWLEDGMENT This work was supported by the French National Agency for Research (ANR), under the MyCloud project (ANR-10-SEGI-0009, http://mycloud.inrialpes.fr/). Part of the experiments were conducted on the Grid’5000 experimental testbed (https://www.grid5000.fr/). REFERENCES
{"Source-Url": "https://hal.archives-ouvertes.fr/hal-00780000/file/IEEE-CCGrid-2013%281%29.pdf", "len_cl100k_base": 7635, "olmocr-version": "0.1.49", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 30487, "total-output-tokens": 9676, "length": "2e12", "weborganizer": {"__label__adult": 0.0003495216369628906, "__label__art_design": 0.0005965232849121094, "__label__crime_law": 0.0005183219909667969, "__label__education_jobs": 0.0011053085327148438, "__label__entertainment": 0.00021219253540039065, "__label__fashion_beauty": 0.0002160072326660156, "__label__finance_business": 0.0014829635620117188, "__label__food_dining": 0.0004267692565917969, "__label__games": 0.0005865097045898438, "__label__hardware": 0.001750946044921875, "__label__health": 0.0009336471557617188, "__label__history": 0.00043487548828125, "__label__home_hobbies": 0.00012254714965820312, "__label__industrial": 0.0006260871887207031, "__label__literature": 0.0005578994750976562, "__label__politics": 0.000457763671875, "__label__religion": 0.00045561790466308594, "__label__science_tech": 0.430419921875, "__label__social_life": 0.00015151500701904297, "__label__software": 0.04205322265625, "__label__software_dev": 0.51513671875, "__label__sports_fitness": 0.0002160072326660156, "__label__transportation": 0.000690460205078125, "__label__travel": 0.00024890899658203125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 38623, 0.02735]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 38623, 0.11184]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 38623, 0.88689]], "google_gemma-3-12b-it_contains_pii": [[0, 1168, false], [1168, 6119, null], [6119, 11481, null], [11481, 15477, null], [15477, 20796, null], [20796, 25686, null], [25686, 28563, null], [28563, 32596, null], [32596, 38623, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1168, true], [1168, 6119, null], [6119, 11481, null], [11481, 15477, null], [15477, 20796, null], [20796, 25686, null], [25686, 28563, null], [28563, 32596, null], [32596, 38623, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 38623, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 38623, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 38623, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 38623, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 38623, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 38623, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 38623, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 38623, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 38623, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 38623, null]], "pdf_page_numbers": [[0, 1168, 1], [1168, 6119, 2], [6119, 11481, 3], [11481, 15477, 4], [15477, 20796, 5], [20796, 25686, 6], [25686, 28563, 7], [28563, 32596, 8], [32596, 38623, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 38623, 0.12821]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
7865c19328ee75229bf822cbddbb87141d915d5f
Social Sensing: When Users Become Monitors

Raian Ali, Carlos Solis, Mazeiar Salehie, Inah Omoronyia (Lero, University of Limerick, Ireland); Bashar Nuseibeh (Lero, University of Limerick, Ireland, and The Open University, UK); Walid Maalej (Technische Universität München, Germany)

Conference or workshop item. © 2011 ACM. Accepted manuscript. Publisher's version: http://dx.doi.org/doi:10.1145/2025113.2025196

ABSTRACT

Adaptation requires a system to monitor its operational context to ensure that, when changes occur, a suitable adaptation action is planned and taken at runtime. The ultimate goal of adaptation is that users get their dynamic requirements met efficiently and correctly. Context changes and users' judgment of the role of the system in meeting their requirements are drivers for adaptation. In many cases, these drivers are hard for designers to identify at design time and hard for the system to monitor by exclusively technological means at runtime. In this paper, we propose Social Sensing as the activity performed by users who act as monitors and provide information needed for adaptation at runtime. Such information helps the system cope with technology limitations and designers' uncertainty. We discuss the motivation and foundations of Social Sensing and outline a set of research challenges to address in future work.

Categories and Subject Descriptors: D.2.2 [Software Engineering]: Design Tools and Techniques; D.2.3 [Software Engineering]; D.3.4 [Software Engineering].

Keywords: Requirements Engineering; Social Software Engineering; Models at Runtime; Adaptive Software Engineering.

1. INTRODUCTION

Self-adaptive systems are increasingly expected to cope with the volatile nature of the environment in which the system operates. Different categories of environmental changes trigger different categories of responses [1]. For example, security breaches and attacks could trigger self-protection actions, and changes in the available resources could trigger self-optimization actions. The ultimate goal of this so-called self-* computing paradigm is that users' dynamic requirements are met efficiently and effectively, and that adaptation is performed autonomously by the system, so that computing transparency is maximized and the effort of humans (designers and users) is minimized [2].

The adaptation loop [3] consists of monitoring changes in the system's operational environment, analyzing the changes, planning an action, executing it, monitoring the effects, and so on. Focusing on the monitoring stage, the system should monitor its context, i.e., the state of the environment in which it operates [4]. Moreover, the system has to monitor whether its executed actions were performed successfully. Self-healing deals with incorrect execution in a way that allows a system to handle faults and errors autonomously. However, the technical correctness of system execution (no bugs, no connection errors, etc.) does not necessarily mean that users' requirements are met [5]. For example, sending an invitation to a meeting can be done via one of two system alternatives: by SMS or by email.
A successful sending of an invitation to a meeting via email does not necessarily mean that the invitee was notified on time, as the invitee might miss the email or misinterpret it. That is, monitoring should primarily be concerned with determining whether users find the system execution a valid and effective way of reaching their requirements, and adaptation should respond to how users judge each system execution against the satisfaction of their requirements.

Monitoring context changes and the quality of each system alternative is not always achievable using solely technological means and might require users to collaborate with the system. For example, in a driver-assistant system, the traffic level in an area is a context attribute that affects the park to which the system should guide the driver. Such context might be un-monitorable due to the lack of necessary infrastructure. As a solution, the system could rely on information obtainable from the drivers' community in that area. The system could have different alternatives for interacting with a driver while assisting him (voice commands, maps, street view, etc.). A quality attribute such as "readability" could be judged differently in different contexts for each of these alternatives. However, neither the designers at design time nor the system at runtime can decide with certainty how drivers judge "readability" for each alternative. As a solution, drivers may be asked to provide such quality judgments at runtime after an alternative is executed.

Besides monitoring the values of context attributes and quality attributes, users could also be involved in identifying such attributes. Users act as monitors to decide which relevant context and quality attributes to add to the design of the system, and which irrelevant ones to remove from it. For example, drivers might add "straightness of the road" as a context attribute which influences the quality of each interaction alternative (voice command, map, street view, etc.) against the quality attribute "readability". Moreover, drivers might add "minimum noise" as a relevant quality attribute which the designers did not consider when designing the system, so that each system alternative is also qualified against it. Thus, users are also monitors for identifying drivers for adaptation, i.e., mainly context and quality attributes.

Maalej et al. [6] discuss how to make users' involvement a first-order concern in software projects, moving from a transactional to a social engineering process. In line with this view, the involvement of users can also take place at runtime, as an integral part of the system's operation and not only of the engineering process. Ali et al. [7] propose to weave together the variability of context and the space of alternatives designed to reach the requirements. However, context is presumed to be monitorable by the system at runtime, and the relation between context and alternatives is specified under certainty. These two design assumptions are hard to meet in certain systems, which might need humans to monitor context and its influence on the activation, adoptability, and quality of each system alternative. In this paper, we propose Social Sensing as a system development technique which involves users, at runtime, in the monitoring activity of the adaptation loop.
The goal is that limitations of technological devices, as well as uncertainty and incompleteness in the system design, are addressed by making users' perception an integral part of the system monitor. Social Sensing treats users as a primitive component of the system instead of pure consumers of its functionality. We discuss the foundations of Social Sensing in Section 2, list research challenges in Section 3, and conclude the paper in Section 4.

2. SOCIAL SENSING: FOUNDATIONS

Social Sensing is based on exploiting users' perception as an integral part of the computation. The system relies on the users' community to obtain information which is un-monitorable by automated means and/or unspecifiable under certainty by designers at design time. The users play the monitor role and provide input to the system so that the right decision and response can be planned and enacted during operation. This is particularly important when dealing with systems involving a community of users. For example, when volunteer drivers provide context information, e.g., the traffic level in a specific area, other drivers will benefit from it when the system executes for them. The information provided by volunteer drivers about the quality of a system behavior, e.g., the comfort level of each interaction technique for guiding a driver, is the main ingredient for the collective judgment of the drivers' community about each alternative, so that the system can act accordingly.

We discuss Social Sensing in the context of adaptive systems. In such systems, context monitoring as well as the validity and quality of system alternatives are essential to guide adaptation. Moreover, we focus on the problem space rather than the solution space, taking users' satisfaction with the role of the system in meeting their requirements as the main goal of adaptation. Social Sensing advocates that users can play a role in establishing the monitoring process. Users understand the system as a means to solve their problems and can collaborate with it as monitors, providing information using their own terms, which belong to the problem domain (requirements, quality, context, validity, etc.) rather than the technical solution domain (bug, error, protocol, proxy, etc.). Thus, one of the ideal domains for Social Sensing is requirements-driven adaptation. In the rest of this section, we discuss the meta-model of this domain (represented in Figure 1) as a baseline for our Social Sensing method.

Variability is the cornerstone of adaptation. A system provided with only one alternative is unable to adapt when context changes. A system alternative is a synthesis of automated and human activities intended to reach certain requirements. In adaptive systems, a requirement could be reached via different system alternatives. For example, in the driver-assistant system, the system could have two main alternatives, "guide to a public park" and "guide to a paid park". The interaction with a driver for guiding him to a suitable park can also be achieved via different alternatives, such as voice commands, an interactive map, or a street view. Adaptation is seen as the selection of the system alternative which best fits the current context. The fitness of a system alternative is measured via both its validity as a means to reach the requirements and its degree of quality.
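Before turning to the meta-model in Figure 1, the selection of a best-fitting alternative can be illustrated with a small sketch. This is a minimal, assumed illustration only: the class names, the five-point quality scale, the exact-match context comparison, and the averaging of past user judgments are introduced here for clarity and are not artifacts of the authors' framework.

```java
import java.util.*;

/** Assumed sketch of the adaptation artifacts discussed in the text:
 *  system alternatives judged by users for validity and quality in a given context. */
class Judgment {
    final Map<String, String> context;  // context attribute -> observed value
    final boolean valid;                // did the alternative reach the requirement?
    final int quality;                  // e.g. 1 (very poor) .. 5 (very good)
    Judgment(Map<String, String> context, boolean valid, int quality) {
        this.context = context; this.valid = valid; this.quality = quality;
    }
}

class SystemAlternative {
    final String name;
    final List<Judgment> history = new ArrayList<>();
    SystemAlternative(String name) { this.name = name; }

    /** Fitness under the current context: invalid alternatives score 0,
     *  otherwise the average quality of past judgments in a matching context. */
    double fitness(Map<String, String> currentContext) {
        List<Judgment> relevant = new ArrayList<>();
        for (Judgment j : history)
            if (j.context.equals(currentContext)) relevant.add(j);
        if (relevant.isEmpty()) return 0.0;
        long validCount = relevant.stream().filter(j -> j.valid).count();
        if (validCount == 0) return 0.0;
        return relevant.stream().mapToInt(j -> j.quality).average().orElse(0.0);
    }
}

public class AdaptationPlanner {
    /** Select the alternative that best fits the current context. */
    static SystemAlternative select(List<SystemAlternative> alternatives,
                                    Map<String, String> currentContext) {
        return alternatives.stream()
                .max(Comparator.comparingDouble(a -> a.fitness(currentContext)))
                .orElseThrow();
    }

    public static void main(String[] args) {
        Map<String, String> ctx = Map.of("trafficLevel", "high", "inHurry", "yes");
        SystemAlternative publicPark = new SystemAlternative("guide to a public park");
        SystemAlternative paidPark = new SystemAlternative("guide to a paid park");
        publicPark.history.add(new Judgment(ctx, false, 2)); // driver did not reach a free spot
        paidPark.history.add(new Judgment(ctx, true, 4));
        System.out.println("Selected: " + select(List.of(publicPark, paidPark), ctx).name);
    }
}
```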
Figure 1. Meta-model of requirements-driven adaptation artifacts

The validity of a system alternative is a binary property referring to its success or failure in reaching the requirement it is intended for. For example, using the guidance of the driver assistant via one alternative, the driver either reaches (valid) or does not reach (invalid) a free parking place. The quality of a system alternative is captured via a number of quality attributes, each representing a distinct characteristic of the degree of excellence of an alternative. For example, the quality of each way of interacting with a driver could be refined to "readability", "fast", or "less distraction". The assessment of a system alternative against a quality attribute could fall on a designated scale (e.g., [very poor, poor, acceptable, good, very good] or [low, medium, high]). The validity and the quality of the operation of a system alternative in the past are the main factors to consider when adaptation is planned, so that the best alternative will be selected and applied.

The validity and the quality of a system alternative are context-dependent. Context is represented via context attributes, each representing a distinct characteristic of the environment in which the system operates, e.g., driving speed, driver age, traffic level, etc. Certain context attributes might influence the validity of a system alternative and/or its quality against certain quality attributes. For example, given the context attribute values (the driver is in a hurry, the distance to the public park is far, and the traffic level is high), the system alternative "guide to public park" is most probably invalid. Certain context attributes might also influence the quality assessment of a system alternative against certain quality attributes. For example, the level of driving experience, the complexity of the road, and the traffic level in the area are context attributes which influence the assessment of each alternative for communicating with a driver against a quality attribute like "less distraction".

Social Sensing plays a major role within the above settings and is characterized by the following four distinct contributions:

1. **Context values.** Users play a role in obtaining values of context attributes that affect the validity and quality of system alternatives but are not monitorable, for reasons such as limitations or failure of technology, lack of infrastructure, etc. Using these values, a system can decide which alternatives are applicable by analyzing the history of each alternative under similar context values in the past. As a result, the alternative which best fits the current context values will be applied. For example, a context attribute like "there is an accident in a certain area" may not be monitorable by the driver-assistant system due to the lack of access to an official traffic management system, or because no such system exists. When volunteer drivers passing close to the accident's location provide such information, the system can use it to guide other drivers.

2. **Quality and validity assessment.** Uncertainty is inherent when designing a system. The validity of a system alternative and its quality assessment against each quality attribute are not always decidable with certainty by designers at design time. In Social Sensing, users play the role of monitors of the validity and quality of each system alternative.
For example, whether guiding a driver with medium driving experience via an interactive map is a valid interaction method is unknown unless the system operates in practice and drivers themselves decide that. Moreover, designers might not be able to decide the quality of guiding a driver who is familiar with the area via voice commands against the quality attribute "less distraction". Validity and quality are not static properties: what is known to be a valid and high-quality alternative at one point in time may lose these characteristics as time passes. Social Sensing allows for a continuous evaluation of the system alternatives by involving the users' community. For example, "voice recognition" might be judged by drivers as a low-quality interaction alternative against a quality attribute such as "ease of use". In the future, when drivers become more familiar with this technology, their judgment of its quality might be different. Social Sensing allows changes in the users' community's judgment of the system alternatives to be captured, so that adaptation stays up to date.

3. **Context attributes identification.** Uncertainty also concerns the identification of the context attributes which affect the validity and the quality of each system alternative. Designers might be uncertain whether their identification is correct and complete. Social Sensing allows users to act as designers while the system is operating, by dropping context attributes that they judge to be irrelevant and adding others which they believe to be relevant for the validity and the quality of each system alternative. In Social Sensing, users can engage with this process throughout the life of a system. This is essential to cope with the fact that relevance itself is not a static property: what is judged relevant at the moment might become irrelevant in the future, and vice versa. For example, unlike the designers' specification, the drivers' community might identify "the existence of a staff assistant" as a relevant context attribute that affects the quality attribute "reliability" of the system alternative "guide to paid parking". However, this attribute may turn out to be irrelevant when the drivers' community becomes more competent with the new technology and trusts it more. Moreover, the designers might specify that "the existence of traffic lights inside the park" is a context attribute which affects all alternatives against the quality attribute "less distraction". The drivers' community might see this decision as a wrong one and may collectively decide to drop this context attribute and consider it irrelevant.

4. **Quality attributes identification.** Similarly to the above discussion about context attributes, designers might miss quality attributes which the users' community finds relevant. Designers might also include quality attributes that are deemed irrelevant by the user community. Social Sensing gives users a voice and allows them to be part of the decision-making team. It allows them to continuously play the role of monitor, deciding which relevant quality attributes to add and which irrelevant ones to drop when appropriate. For example, "reduced pollution" might be considered by the drivers' community as a relevant quality attribute when evaluating each system alternative, so that the system might choose a parking place that is not ideal in terms of the time and effort required to reach it but is good for reducing pollution in the area.
Thus, if the drivers' community decides that this attribute is relevant, it will be added to the list of quality attributes defined initially by the designers. Moreover, the users' community might drop some attributes from that list if they are found to be irrelevant. For example, in a city where traffic is often low and the need to reduce pollution is not critical, the drivers' community might collectively decide to drop the attribute "reduced pollution". Social Sensing allows users to express their opinion so that the system can analyze it and take decisions which reflect the collective intelligence of the users' community.

The information provided by the users' community at runtime is a main ingredient for planning and enacting adaptation. On the one hand, it helps the system cope with the limitations of the technological means of monitoring the environment and with the uncertainty and incompleteness of the designers' decisions. On the other hand, it allows users to drive the adaptation and maximize its correctness, so that their requirements are reached in the best available way when changes happen.

3. **RESEARCH CHALLENGES**

While Social Sensing is powerful for crowd-sourcing users and enabling them to act as monitors, it brings several software engineering challenges.

1. **Users' subjectivity.** Social Sensing relies on the existence of a certain degree of similarity in the perception of different users. That is, Social Sensing requires that users' perceptions of the values and relevance of the adaptation drivers (context and quality attributes, etc.) are similar. However, this is not always the case, and users might perceive adaptation drivers subjectively. For example, the value of a context attribute like "traffic level" could be reported by one driver as "medium" and by another as "high". The same subjectivity could arise when assessing the system alternatives against a quality attribute. Devising methods and analysis mechanisms to normalize different users' perceptions is a challenging problem of Social Sensing; a minimal sketch of one possible aggregation scheme is given after this list of challenges.

2. **Trust management.** Social Sensing requires users to provide information and thus implies dealing with the trustworthiness of information and users. The benefits of openness to the crowd might be sacrificed if untrusted users, who might intentionally or unintentionally cause harm to the system or misuse it, are not detected and dealt with. Moreover, users need to trust the system itself before collaborating with it. Developing systems that adopt Social Sensing and are able to inspire users' trust is another socio-technical challenge, and achieving such trust has to be engineered as a first-class requirement of the whole system.

3. **Security and privacy.** Depending on the criticality and sensitivity of the monitored information, security goals such as confidentiality, integrity, and availability might become concerns in Social Sensing. For example, Social Sensing might not be ideal for a driver-assistant system for an ambulance that is typically assigned to critical missions, unless information provided by the drivers' community in the area is strictly verified and secured. Moreover, while Social Sensing relies on crowd-sourcing a large number of users, it also opens the door for malicious users to attack the system. For example, some drivers might provide wrong information that leads to less traffic in the areas where they drive.
Furthermore, some of the security requirements in Social Sensing could be in conflict with other categories of requirements, such as privacy requirements. For example, if a driver refuses to provide his location for privacy reasons, the system might not be able to help him avoid traffic, thereby making the main system service practically unavailable.

4. **Transparency.** An important goal of adaptation is to minimize humans' (users' and designers') effort and maximize computing transparency. Social Sensing implies the intervention of users as monitors, and thus users are required to provide input that is not necessarily used for their own immediate benefit. This means that Social Sensing, if not designed effectively, may provide adaptation capabilities for one group of people while potentially violating the adaptation of another. For example, the evaluation of a system alternative may be provided by a user after the operation terminates, so that the system benefits in subsequent operations executed for the benefit of different drivers. Devising mechanisms to encourage users to act as monitors and to feel some gain from doing this task is therefore a research challenge to address.

5. **Volatility.** The validity of information provided by users, especially about context changes, is volatile. Context may change so rapidly that information which was true when users provided it might become false when the system starts to plan and enact adaptation. A Social Sensing design has either to deal with this volatility or to avoid taking decisions based on information whose validity is highly volatile. For example, when the car needs to be refueled, the driver assistant activates the requirement "guide the driver to a filling station". By the time the system receives information from other drivers that there is a filling station close to the driver's location and starts to plan and execute adaptation (notifying the driver, getting his confirmation, choosing the right interaction method, etc.), the driver may have passed the station. That is, the system has to deal with the liveness of sensed information.

6. **Implementation.** There are major challenges regarding the implementation of Social Sensing. These include how to represent context and quality attributes and the values and judgments provided by users, how to capture this information efficiently and independently of the applications, what information to collect exactly and how long this data should be stored, etc. [6]. Moreover, involving users in dealing with a large volume of information might compromise the applicability of Social Sensing. For example, the list of quality attributes provided by the users' community could grow to an extent where users find it tedious to assess a system alternative against all the attributes it contains. This means that the system might need iterative maintenance so that applicability is not sacrificed. Ideally, the system should help designers maintain the system by pointing out loci where designers need to fix errors or take other altering actions.
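As forecast under challenge 1 above, one way to turn subjective user reports into a single collective value is a simple ordinal aggregation. The following is a minimal sketch only: the fixed scale, class names, and median-based rule are illustrative assumptions, not part of the authors' proposal.

```java
import java.util.*;

/** Illustrative (assumed) aggregation of subjective user judgments into a
 *  collective ordinal value, e.g. for a context attribute such as "traffic level". */
public class CollectiveJudgment {
    // Shared ordinal scale; individual users may map their perception onto it differently.
    static final List<String> SCALE = List.of("low", "medium", "high");

    /** Aggregate by taking the median position on the scale, which dampens
     *  outliers compared to averaging free-form numeric scores. */
    static String aggregate(List<String> reports) {
        List<Integer> positions = new ArrayList<>();
        for (String r : reports) {
            int idx = SCALE.indexOf(r);
            if (idx >= 0) positions.add(idx);   // ignore reports outside the scale
        }
        if (positions.isEmpty()) return "unknown";
        Collections.sort(positions);
        return SCALE.get(positions.get(positions.size() / 2));
    }

    public static void main(String[] args) {
        // One driver reports "medium", two report "high": the community value is "high".
        System.out.println(aggregate(List.of("medium", "high", "high")));
    }
}
```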
4. **CONCLUSIONS AND FUTURE WORK**

In this paper, we proposed Social Sensing as a system development technique in which users' perception is part of the system computation. We advocated that users are a powerful source of information that drives adaptation. In Social Sensing, users act as monitors, increasing the ability of the system and its designers to capture the values and the relevance of certain adaptation drivers. Among these drivers, we discussed context, quality, and validity. Social Sensing implies a direct interaction with users. Thus, users interact using their own terms, i.e., their problem-domain terms, which explains the focus of our discussion of Social Sensing from a requirements engineering perspective.

Our future work includes developing a methodology (models, a development process, analysis techniques, and a software framework for Social Sensing) for incorporating the role of users in the design of requirements at runtime. Our ultimate goal is to develop capabilities that make Social Sensing viable and useful from two perspectives. First, users' interaction with the system should be facilitated, and users' awareness of the consequences and benefits of their interaction should be maximized. In other words, engineering the awareness of users and facilitating and encouraging their collaboration with the system represents the first main thread of research we plan to conduct. Second, the system has to be provided with analysis techniques to process the information gathered from its users' community and make use of it at runtime. These include deciding on the significance of the information provided by users and formulating the community's collective judgment, autonomously or with minimal intervention by designers.

ACKNOWLEDGMENTS. This work has been partially funded by the EU Commission through the FastFix project, and by Science Foundation Ireland grant 10/CE/I1855. We also thank Vinny Cahill, Siobhán Clarke, and Gavin Doherty for the discussions which enriched the idea presented in this paper.

5. **REFERENCES**
{"Source-Url": "http://oro.open.ac.uk/32322/1/2011-Ali-Social_Sensing.pdf", "len_cl100k_base": 4831, "olmocr-version": "0.1.53", "pdf-total-pages": 5, "total-fallback-pages": 0, "total-input-tokens": 14822, "total-output-tokens": 5465, "length": "2e12", "weborganizer": {"__label__adult": 0.0003345012664794922, "__label__art_design": 0.0004227161407470703, "__label__crime_law": 0.00030422210693359375, "__label__education_jobs": 0.0013685226440429688, "__label__entertainment": 7.975101470947266e-05, "__label__fashion_beauty": 0.00014901161193847656, "__label__finance_business": 0.0002872943878173828, "__label__food_dining": 0.0003325939178466797, "__label__games": 0.0006237030029296875, "__label__hardware": 0.0006785392761230469, "__label__health": 0.0005369186401367188, "__label__history": 0.0003008842468261719, "__label__home_hobbies": 7.56382942199707e-05, "__label__industrial": 0.0003209114074707031, "__label__literature": 0.00045013427734375, "__label__politics": 0.00030875205993652344, "__label__religion": 0.00038504600524902344, "__label__science_tech": 0.04058837890625, "__label__social_life": 0.0001550912857055664, "__label__software": 0.0126953125, "__label__software_dev": 0.9384765625, "__label__sports_fitness": 0.00023508071899414065, "__label__transportation": 0.0007805824279785156, "__label__travel": 0.0002008676528930664}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27036, 0.02634]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27036, 0.53819]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27036, 0.92852]], "google_gemma-3-12b-it_contains_pii": [[0, 798, false], [798, 6984, null], [6984, 13635, null], [13635, 20691, null], [20691, 27036, null]], "google_gemma-3-12b-it_is_public_document": [[0, 798, true], [798, 6984, null], [6984, 13635, null], [13635, 20691, null], [20691, 27036, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27036, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27036, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27036, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27036, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27036, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27036, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27036, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27036, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27036, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27036, null]], "pdf_page_numbers": [[0, 798, 1], [798, 6984, 2], [6984, 13635, 3], [13635, 20691, 4], [20691, 27036, 5]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27036, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
89f7a797590e6ade250ab671dfbbb6af2b723ab1
Multi-Resolutional Knowledge Representation Using Prototypes and Properties

Jeffrey Berliner, Michael Thome, Daniel Cerys
BBN Technologies, 10 Moulton St, Cambridge, MA 02138
berliner@bbn.com, mthome@bbn.com, cerys@bbn.com

Abstract—In the course of developing a distributed logistics command and control application based on the Cougaar agent architecture (http://www.cougaar.org/), we were faced with a large knowledge representation problem. The logistics agents needed to reason about many thousands of different types of logistics assets, where each asset had hundreds of attributes. The problem is factorable, however, in that each specialized logistics agent needs to know only a relatively small amount of information about only a relatively small number of assets. By using techniques of prototypes and delegation, adapting the paradigm suggested by Lieberman [1] and others [2, 3], we developed a logical data model (the Cougaar LDM) for logistics that effectively factors this representation problem. This factoring provides a basis for multi-resolutional representations of the entities in the logistics system; the information about any entity at each logistics agent can be limited to only that subset in which the agent has interest.

1. INTRODUCTION

This paper describes the design and implementation of a mechanism for multi-resolutional representation of entities and concepts across a distributed set of agents. This mechanism is the basis of the Logical Data Model (LDM) component of the Cougaar Cognitive Agent Architecture [4, 5]. Our motivation stemmed from the scope of the application domain we were facing: the design and development of a logistics command and control application spanning a large number of distributed organizations, each with diverse sets of equipment, facilities, and personnel, and performing a wide variety of operational and logistics support tasks.

The scope of the application domain led us to an architecture based on a distributed set of agents, each of which represents a functional organizational entity. The scope and variety of the information required to describe each of the entities and concepts in this domain led us to a knowledge representation design based on prototypes and delegation, as suggested by Lieberman [1], Stein et al. [2], Taivalsaari [3], and others. Consideration of the detailed, but often diverse, information requirements of each agent, and the requirement to introduce additional agent types and different reasoning capabilities as the system matured, led us to extend the delegation/prototype design to a multi-level delegation mechanism using properties and behaviors which are dynamically bound to the prototypes. This multi-level delegation provides an efficient, natural mechanism for multi-resolutional knowledge representation throughout our system. As agents need to communicate about logistics entities, prototypes may be sent between the agents or constructed as needed, thus providing control over which properties and behaviors are transmitted or instantiated at each agent. In this way, the information about any entity at each logistics agent can be limited to only that subset in which the agent has interest.

This paper describes the issues arising in the problem domain that motivated this work, the principles of the design which address these issues, some details of the design and implementation, and examples of usage patterns that exploit the benefits of the design.
2. PROBLEM DESCRIPTION AND ISSUES

A key aspect of logistics planning and execution involves reasoning about things, their properties, their relationships, and the activities in which these things participate. The things under consideration may generally be considered assets of one sort or another, and include equipment (e.g., vehicles, machinery, electronic devices, weapon systems, etc.), materiel (e.g., fuel, machine parts, food, etc.), facilities (e.g., transportation networks, airports, sea ports, repair depots, warehouses, hospitals, etc.), organizations (i.e., civil, military, and commercial organizations), and individual people. In order to construct a computer system which participates in logistics planning and execution, it is necessary to have a mechanism to represent all the properties of assets required for logistics reasoning. Several aspects of these assets and their use in logistics systems make the problem hard:

- The set of asset types and asset properties is very large. These asset properties must describe the forms and functions of each asset required for logistics reasoning.
- For equipment assets, properties include such things as physical characteristics, functional performance characteristics, environmental requirements, reliability and reparability characteristics, and relational properties such as component specifications and consumable requirements.
- For facility assets, the properties include such things as physical and geographic characteristics (e.g., location, physical extent), functional capabilities, and equipment and staffing characteristics and requirements.
- For organization assets, the properties include such things as personnel membership and skill requirements, equipment ownership and requirements, functional capabilities, and organizational relation properties (e.g., superior-subordinate relations and provider-consumer relations).
- The set of assets and properties evolves continuously over time, as new models and types of equipment and materiel are continuously introduced and modified, and older ones are retired.
- Reasoning must be done over a range of granularities, since varied amounts of detail are required at different locations, echelons, and stages in the logistics planning and execution processes.
- Early in a planning process, it might be known that about 4,000 people are to be deployed to a disaster site, and that they will need to provide food, water, electricity, and shelter for about 90,000 people.
- Later, it might be known that a set of mobile, diesel-powered electric generators capable of producing an aggregate of 2.5 megawatts of electricity will be deployed.
- Still later, it might be known that three of the generators are MEP-208A, 750 kW, 50-60 Hz, skid-mounted, diesel-engine-driven generator sets (NSN: 6115-00-450-5881), which have fuel tank capacities for approximately 8 hours of operation at rated load [6].
- Finally, it might be known that one of the MEP-208A generators is Serial Number A2709B, and that this particular generator has been de-rated and limited to 600 kW until its next major overhaul.
- Different portions of a logistics planning and execution system require different granularities of specialized knowledge. For example, each participating organization knows all the details of its own operation, as well as the details of its interactions with customers and providers. However, it does not need to know or manage irrelevant information which is only required by these other organizations.
- An emergency relief organization operating after a hurricane disaster coordinates diverse operations including debris removal, emergency power generation, emergency roofing repair, and food, water, and ice supply and distribution. The logistics agents supporting its operation need to reason about the requirements for these services and the overall capabilities and availability of the providers of these services.
- A prime power engineering organization operates and manages its power generation equipment, trains its personnel, and coordinates with the managing relief organization, with transportation providers to get to the disaster site, and with fuel supply providers.
- A fuel distribution organization manages contracts and orders with its customers and suppliers. It manages fuel inventories, storage facilities, a fleet of distribution trucks, and a set of personnel with varied skills. Though it provides fuel for the electric generators, it reasons only about the fuel delivery requirements specified by the owner of the generators.
- An equipment transportation organization manages contracts and orders with its customers and facility operators. It manages a fleet of transportation vehicles (trucks, aircraft, and/or ships) and a set of personnel with varied skills. Since it provides transportation for the electric generators, it is required to reason only about the transportability of the generators: weight, physical dimensions, shock and rough-handling limitations, etc.

3. DESIGN PRINCIPLES

In order to achieve the factoring necessary to cope with these aspects of large logistics systems, we established some basic principles for the design of our logistics knowledge representation.

- Assets are primarily modeled based on their properties rather than on a hierarchical representation of what they are. It does not matter whether a towed electric generator is a generator or a trailer, as long as it has the properties of a trailer and the properties of an electric generator. (Similarly, it does not matter whether a tank is a vehicle or a weapon, as long as it has the properties of a vehicle and the properties of a weapon.)
- Related properties are collected in property groups. Experience has shown that attributes often occur naturally in groups and are required by reasoning agents in these groups. For example, physical attributes such as length, width, height, mass, etc. are used together (e.g., when planning to pack items inside a container) and change together (e.g., when an asset is physically modified).
- Specialized property groups known as *behavior groups* encapsulate behavior about their specific properties (i.e., *methods*, using object-oriented terminology). For example, an inventory behavior group encapsulates an inventory management algorithm for a fuel storage tank, and a fuel consumption behavior group encapsulates the fuel consumption characteristics of a vehicle.
- *Asset* instances derive most of their properties from *prototype* instances. This greatly reduces the number of classes required to represent the logistics domain and allows new types of assets to be defined and created dynamically.
- The class of an asset prototype determines the property groups which must be present in each prototype instance. This provides regularity in the normal properties of related things and allows convenient programmatic access to these property groups. For example, all CargoTruck prototypes have the ContainProperty.
However, it is not required that the class of an asset's prototype at one agent be the same as the class of that asset's prototype at another agent. This allows different agents to reason about the same assets from different perspectives. Further, because an arbitrary number of property groups can be added to an instance or prototype (see the next bullet), the specific class of an asset should only be relevant to the creators of the objects (encouraging them to provide values for all required property groups).

- A prototype instance may include additional property groups. This provides flexibility in extending the properties of special types of assets. For example, a truck with a hoist mounted on it can have the LiftProperty. In fact, it is not required that the class of an asset instance match the class of the asset's prototype.
- An actual asset instance may refer to specialized property groups which differ from those of its prototype. This provides flexibility in specializing the properties of particular instances of actual things. For example, a particular electric generator may have a degraded power generation capability.

4. APPLICATION TO LOGISTICS COMMAND AND CONTROL

Given these principles, which factor knowledge of logistics properties and behavior, we have been able to let the detailed representation of any particular asset differ depending on the details of the asset, the granularity of the processing, and the perspective, interests, or needs of the using agent. Thus, the instantiated aspects (properties or attributes) of an asset change as references to that same asset move throughout the society. Continuing the mobile electric generator example, the following are a few examples of agent roles and the corresponding property groups of interest for a mobile generator asset. An emergency electric power generation company, which owns and operates sets of mobile generators and power distribution systems, needs to know such things as the requirements for power generation, where its generators will be situated, how much fuel will be available for the generators, and when its generators will be out of service for scheduled maintenance or repair. A transoceanic shipping company, which has been contracted to ship the generators, needs to know the physical weight and dimensions of the generators, as well as the shock and rough-handling limitations and the temperature and moisture limitations of each generator in its shipping configuration. A generator repair shop needs to know what is wrong with the generators it has been contracted to repair and what resources (parts, repair equipment, mechanical skills, time) will be required to repair each generator.

5. ACHIEVING MULTI-RESOLUTIONAL KNOWLEDGE REPRESENTATION

To achieve this multi-resolutional representation in the Cougaar LDM, an asset is represented abstractly as an object with an instance identifier (UID), a set of property groups (with property-value pairs), behavior groups (with computational methods and possibly additional property-value pairs), and a pointer to its prototype. Calls to property accessors on the instance object will first check the local property groups and behavior groups, and then delegate the call to the prototype object. Typically, the prototypes hold the majority of the property groups and behavior groups; only very specialized, unique asset instances are likely to have their own. Although we have not emphasized it in our implementation, prototypes can delegate to other, more abstract prototypes.
Because of this, delegation is a recursive operation that traverses the complete prototype chain. Assets may be transferred between agents by sending the asset's UID, its collection of property groups, and the UID of its prototype. On the receiving side, a new object is constructed representing the asset, and then the prototype UID is resolved. The prototype resolution process first checks to see whether the prototype is already known; if not, it uses a local prototype factory to attempt to construct a prototype object from local resources. Depending on the agent's particular data requirements, the factory will construct a prototype object containing only and exactly those property groups (and properties) needed for local data processing. Some such factories will not provide any details, effectively leaving the asset without properties while it passes through the agent. The effect is that the object's shape (its associated properties) changes radically as it moves through the network of agents.

Object aggregates may be formed by creating a new asset instance (e.g., a Box) with a key to a manifest list. The aggregate instance may then be handled as a simplified opaque object (e.g., using only the box's length, weight, etc.) as it moves through agents which do not care about the details of its contents. When the aggregate is unpacked, the manifest may be de-referenced by retrieving the original asset list from the agent which performed the aggregation.

6. EXAMPLES OF ASSETS, PROTOTYPES, PROPERTY GROUPS, AND DELEGATION

Figure 1 shows a set of typical Cougaar LDM classes, illustrating how trucks and generators would be represented.

- The Asset class is used to represent any asset. It always has either a TypeIdentificationPG attribute, which is always present for prototypes and refers to an instance of the TypeIdentificationPG class, or an ItemIdentificationPG attribute, which is always present for asset instances and refers to an instance of the ItemIdentificationPG class. It also has an OtherPG attribute, which contains a list of all other property groups and behavior groups that describe a particular asset prototype. For maximum generality, code written to deal with assets should in general not test whether a particular asset instance or prototype is an instance of any class other than Asset. The other, specialized classes, such as Truck, are intended strictly for implementation optimization.
- The TypeIdentificationPG class specifies the identity of the type of asset being represented. It has the attributes typeIdentification, which typically specifies an NSN (National Stock Number); altTypeIdentification, which typically specifies some other domain-specific identifier; and nomenclature, which provides a human-readable descriptive string.
- The ItemIdentificationPG class specifies the identity of a particular asset. It has the attributes itemIdentification, which specifies a user-specific identifier, and nomenclature, which provides a human-readable descriptive string.
- The Truck class is used to represent asset types which are trucks. It is a subclass of Asset, and it is guaranteed to have additional attributes such as PhysicalPG, TransportabilityPG, GroundSelfPropulsionPG, FuelConsumptionBG, and ContainPG.
- The ElectricGenerator class is used to represent asset types which are generators. It is a subclass of Asset, and it is guaranteed to have additional attributes such as PhysicalPG, ElectricGenerationBG, and FuelConsumptionBG.
- The PhysicalPG class specifies basic physical properties of assets, including length, width, height, footprintArea, volume, and mass.
- The TransportabilityPG class specifies attributes of specific importance to transporters, such as air transportability, protective housing, and shock and vibration limitations.
- The GroundSelfPropulsionPG class specifies properties associated with self-propelled ground vehicles. Its attributes include tractionType (wheeled, tracked, etc.), engineType, maxSpeed, cruiseSpeed, etc.
- The FuelConsumptionBG class specifies the normal and alternate types of fuel, and provides methods to compute fuel consumption based on usage.
- The ContainPG class specifies properties associated with things which contain cargo. Its attributes include maxLength, maxWidth, maxHeight, maxVolume, maxMass, and maxPassengers.
- The ElectricGenerationBG class specifies the functional capabilities of generators, and provides methods to determine specific capabilities as functions of the load and environment.
- The MaintenancePG class holds the maintenance log and planned maintenance schedule for a specific asset, not for a prototype.
- The ReliabilityPG class specifies the functional reliability of an asset type, and the preventive maintenance required to achieve that reliability.
- The RepairProcedurePG class holds a set of maintenance procedures for an asset. It specifies the procedures and resources (parts, repair equipment, mechanical skills, time) required to perform repairs of the asset. This is an example of a specialized property group that only a particular agent type would require.

Figure 1 - Typical Cougaar LDM Asset and Property Group Classes

Figure 2 shows instances representing two asset prototypes (an M978 fuel truck [7] and an MEP-208A generator [6]) which are instances of the Truck and ElectricGenerator classes, and a set of instances of property group classes which represent the properties of these prototypes. The object instances which describe the M978 fuel truck and MEP-208A generator prototypes are:

- The M978 Truck Prototype is an instance of the Truck class. Its attributes refer to property group instances whose values describe the properties of the M978 truck prototype. These include M978-TypeIdentificationPG, M978-PhysicalPG, M978-FuelTypeConsumptionBG, M978-ContainerPG, and M978-ReliabilityPG.
- The MEP-208A Generator Prototype is an instance of the ElectricGenerator class. Its attributes refer to property group instances whose values describe the properties of the MEP-208A generator prototype. These include MEP-208A-TypeIdentificationPG, MEP-208A-PhysicalPG, MEP-208A-ElectricGenerationBG, and MEP-208A-FuelConsumptionBG, as well as MEP-208A-TransportabilityPG and MEP-208A-ReliabilityPG.

Figure 3 shows instances representing several individual M978 trucks and MEP-208A generators. Each truck and generator instance has three key attributes:

- A reference to its prototype. All the M978 trucks share the same M978 Truck Prototype, and all the MEP-208A generators share the same MEP-208A Generator Prototype.
- A reference to an ItemIdentification property group. Each asset has its own ItemIdentification property group.
- A reference to a MaintenancePG which is specific to the individual asset and holds the maintenance log and planned maintenance schedule for that asset.

In addition, the MEP-208A generator identified by S/N A2709B has its own ElectricGenerationBG, which specifies its unique, de-rated electric generation capabilities.

Figure 3 - Asset Instances Representing Several Individual M978 Trucks and MEP-208A Generators
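The property-lookup delegation described in Section 5, and the sharing of a prototype by the instances just listed, can be sketched as follows. This is a minimal illustration only, assuming simplified class names, serial numbers, and numeric values; it is not the Cougaar LDM API.

```java
import java.util.*;

/** Minimal sketch (assumed names, not the Cougaar API) of property-group
 *  delegation from an asset instance to its shared prototype. */
class PropertyGroup {
    final Map<String, Object> values = new HashMap<>();
    PropertyGroup set(String name, Object value) { values.put(name, value); return this; }
}

class SketchAsset {
    final String uid;
    final SketchAsset prototype;                       // null for a root prototype
    final Map<String, PropertyGroup> groups = new HashMap<>();
    SketchAsset(String uid, SketchAsset prototype) { this.uid = uid; this.prototype = prototype; }

    /** Accessor: check local property groups first, then delegate up the prototype chain. */
    PropertyGroup get(String groupName) {
        PropertyGroup local = groups.get(groupName);
        if (local != null) return local;
        return prototype != null ? prototype.get(groupName) : null;
    }
}

public class DelegationExample {
    public static void main(String[] args) {
        // Shared prototype carries the type-level property and behavior groups.
        SketchAsset mep208a = new SketchAsset("MEP-208A", null);
        mep208a.groups.put("ElectricGenerationBG", new PropertyGroup().set("ratedKW", 750));
        mep208a.groups.put("PhysicalPG", new PropertyGroup().set("massKg", 7800)); // illustrative value

        // Ordinary instance: everything is delegated to the prototype.
        SketchAsset genB = new SketchAsset("Gen-B0001", mep208a);

        // De-rated instance: overrides one behavior group locally, as in the S/N A2709B example.
        SketchAsset genA2709B = new SketchAsset("Gen-A2709B", mep208a);
        genA2709B.groups.put("ElectricGenerationBG", new PropertyGroup().set("ratedKW", 600));

        System.out.println(genB.get("ElectricGenerationBG").values);      // {ratedKW=750}
        System.out.println(genA2709B.get("ElectricGenerationBG").values); // {ratedKW=600}
        System.out.println(genA2709B.get("PhysicalPG").values);           // delegated: {massKg=7800}
    }
}
```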
7. EXAMPLES OF ASSETS WITH MULTI-RESOLUTIONAL REPRESENTATIONS

Consider the interactions between several logistics agents and the different asset representations required by each of them. Figure 4 shows four logistics agents and their varied views of two assets:

- POL-TRKCO represents a fuel transportation company. It owns a number of M978 fuel trucks, including Truck-789.
- PRIMEPWR-ENGCO represents an engineering company which provides emergency electricity with its portable electric generators. It owns a number of MEP-208A generators, including Gen-A2709B.
- MAINTCO represents a fictitious equipment maintenance company which repairs a variety of equipment, including trucks and electric generators.
- MSC represents Military Sealift Command, which is responsible for shipping military cargo worldwide.

In this example, POL-TRKCO and PRIMEPWR-ENGCO send tasks to MAINTCO and MSC requesting them, respectively, to perform maintenance on their equipment and to transport their equipment to a disaster site overseas. As owners and managers of their equipment, POL-TRKCO and PRIMEPWR-ENGCO each create fully populated assets and prototypes for all their equipment. Figure 4 shows their fully populated instances of Truck-789 and Generator-A2709B. However, when references to these assets are transmitted to MSC and MAINTCO, these agents construct differently populated assets and prototypes for them. Beyond the ubiquitous ItemIdentification and TypeIdentification PGs, MSC requires only the PhysicalPGs and the TransportabilityPGs. Similarly, MAINTCO requires the MaintenancePGs and ReliabilityPGs. However, MAINTCO is required to add the RepairProcedurePG for all reparable assets, and it may also require the ElectricGenerationBG for the generator. In this way, each agent which reasons about an asset needs to represent only those aspects of the asset with which it is concerned.

8. IMPLEMENTATION

The Cougaar LDM is implemented by means of a set of core components and services available to the components (generally plugins) of the Cougaar agents, as well as a set of domain-specific components which must be implemented for domain-specific applications. The core components include:

- Factory objects which create LDM objects.
- A UIDServer which supports the generation of globally unique keys for distributed objects.
- An asset prototype cache for sharing commonly used prototypical asset references.
- A mechanism for requesting that LDM plugins provide properties for a new asset.

The domain-specific components generally comprise:

- Prototype provider plugins, which create asset prototypes.
- Property and behavior provider plugins, which create property and behavior groups and attach them to asset prototypes.

Instances of the class Asset may represent prototypical objects (e.g., "Model MEP-208A Electric Generator Set") or an actual, identifiable instance of such a prototype (e.g., "Model MEP-208A Electric Generator Set" serial number "A2709B"). Identifiable assets usually delegate most of their properties to a shared prototypical asset of the right type. To facilitate the sharing of such assets, the LDM allows plugins to "cache" any prototypes which they create so that others can find them later. There is a method on both the RootFactory and the LDM, getPrototype(), which first checks the prototype asset cache for the appropriate asset and then invokes PrototypeProvider LDM plugins until one is able to supply the right one.
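The cache-then-provider lookup just described can be sketched as follows; the interfaces and names below are simplified assumptions for illustration, not the actual RootFactory/LDM API.

```java
import java.util.*;

/** Illustrative (assumed) sketch of prototype resolution: consult a shared cache
 *  first, then ask registered prototype-provider plugins until one can supply it. */
interface PrototypeProvider {
    /** Return a prototype for the given type identification, or null if unknown. */
    SketchPrototype makePrototype(String typeId);
}

class SketchPrototype {
    final String typeId;
    final Set<String> propertyGroups;   // only the groups this agent cares about
    SketchPrototype(String typeId, Set<String> propertyGroups) {
        this.typeId = typeId; this.propertyGroups = propertyGroups;
    }
}

public class PrototypeRegistry {
    private final Map<String, SketchPrototype> cache = new HashMap<>();
    private final List<PrototypeProvider> providers = new ArrayList<>();

    public void register(PrototypeProvider p) { providers.add(p); }

    /** Mirrors the getPrototype() behavior described in the text. */
    public SketchPrototype getPrototype(String typeId) {
        SketchPrototype cached = cache.get(typeId);
        if (cached != null) return cached;
        for (PrototypeProvider p : providers) {
            SketchPrototype made = p.makePrototype(typeId);
            if (made != null) {
                cache.put(typeId, made);   // cache so other plugins can reuse it
                return made;
            }
        }
        return null;                       // no provider recognized this type
    }

    public static void main(String[] args) {
        PrototypeRegistry ldm = new PrototypeRegistry();
        // A transport-oriented agent would populate only transport-relevant groups.
        ldm.register(typeId -> "NSN:6115-00-450-5881".equals(typeId)
                ? new SketchPrototype(typeId, Set.of("PhysicalPG", "TransportabilityPG"))
                : null);
        System.out.println(ldm.getPrototype("NSN:6115-00-450-5881").propertyGroups);
    }
}
```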
The LDM also has a set of methods which allow Plugins to create assets (especially Prototypes) without knowing all the properties which apply. These methods allow the creator of an asset to request that all PropertyProvider LDMPPlugins be called to fill in whatever PropertyGroups they can. The construction of an asset is generally initiated by either a notification that an agent has a new asset of its own to manage, or by receiving a reference to a new asset (typically as part of a task from another agent). In either case, the usual construction sequence of an asset is something like: - A plugin (Plugin1) of an agent detects the reference to the new asset and decides it needs to create an “actual” asset of type T, because it must reason in some way about the asset - Plugin1 asks the LDM to create a prototype asset of type T. - The LDM looks in the prototype cache and fails to find anything matching T. - The LDM invokes a prototype provider plugin (Plugin2). - Plugin2 constructs a new asset prototype (Proto1) of the right class. • Plugin2 asks the LDM to create and populate the properties of Proto1. • The LDM invokes one or more property provider and behavior provider plugins (Plugin3 and Plugin4). • Plugin3 adds a property group to the asset prototype (proto1) and returns. • Plugin4 adds a behavior group to the asset prototype (proto1) and returns. • Plugin2 asks the LDM to cache the Proto1. • Plugin1 asks the RootFactory to create an instance of Proto1. • RootFactory creates an instance of the asset (Asset1) which delegates to Proto1. • Plugin1 adds Asset1 to the agent blackboard. Thus when an agent receives a reference to a new asset type, the particular property groups and behavior groups it attaches to the asset prototype depend on the particular property provider and behavior provider plugins with which it is provisioned. By this means, each agent has control of its own view of the assets about which it reasons: assets which it owns and manages, assets for which it provides services, and assets which provide services to it. A more detailed description of how the implementation of the Cougaar LDM interacts with the other components of the Cougaar architecture (agents, plugins, blackboard, and other components) is provided in the Cougaar Architecture Document (BBNT-2003a). Detailed usage patterns for implementing assets in the Cougaar LDM are described in detail in the Cougaar Developers’ Guide (BBNT-2003b). 9. CONCLUSION By using techniques of prototypes and delegation, we have developed a logical data model (the Cougaar LDM) for logistics that effectively factors an otherwise unwieldy representation problem. This factoring provides a basis for multi-resolutional representations of the entities in a large, agent-based logistics system; the information about any entity at each logistics agent can be limited to only that subset in which the agent has interest. 10. ACKNOWLEDGEMENTS This work is sponsored by the US Defense Advanced Research Agency (DARPA) and is managed under DARPA’s Joint Logistics Technology Office (JLTO). 11. REFERENCES
{"Source-Url": "http://cougaar.org/doc/papers/2003/Kimas03Berliner-1.pdf", "len_cl100k_base": 5403, "olmocr-version": "0.1.50", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 48757, "total-output-tokens": 6334, "length": "2e12", "weborganizer": {"__label__adult": 0.00089263916015625, "__label__art_design": 0.0007624626159667969, "__label__crime_law": 0.0011911392211914062, "__label__education_jobs": 0.00156402587890625, "__label__entertainment": 0.00016379356384277344, "__label__fashion_beauty": 0.0003817081451416016, "__label__finance_business": 0.0010595321655273438, "__label__food_dining": 0.0006580352783203125, "__label__games": 0.001870155334472656, "__label__hardware": 0.0034275054931640625, "__label__health": 0.0008120536804199219, "__label__history": 0.0008020401000976562, "__label__home_hobbies": 0.0002149343490600586, "__label__industrial": 0.0036869049072265625, "__label__literature": 0.0005350112915039062, "__label__politics": 0.0007843971252441406, "__label__religion": 0.0008311271667480469, "__label__science_tech": 0.1690673828125, "__label__social_life": 0.0001710653305053711, "__label__software": 0.0243682861328125, "__label__software_dev": 0.751953125, "__label__sports_fitness": 0.0007071495056152344, "__label__transportation": 0.03350830078125, "__label__travel": 0.0005750656127929688}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29097, 0.02098]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29097, 0.6769]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29097, 0.92394]], "google_gemma-3-12b-it_contains_pii": [[0, 4222, false], [4222, 9391, null], [9391, 14897, null], [14897, 18336, null], [18336, 21460, null], [21460, 24031, null], [24031, 25698, null], [25698, 29097, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4222, true], [4222, 9391, null], [9391, 14897, null], [14897, 18336, null], [18336, 21460, null], [21460, 24031, null], [24031, 25698, null], [25698, 29097, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29097, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29097, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29097, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29097, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29097, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29097, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29097, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29097, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29097, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29097, null]], "pdf_page_numbers": [[0, 4222, 1], [4222, 9391, 2], [9391, 14897, 3], [14897, 18336, 4], [18336, 21460, 5], [21460, 24031, 6], [24031, 25698, 7], [25698, 29097, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29097, 0.0]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
f799e376d812830dda288ec4c8ac3b72606c0854
The Security Twin Peaks Conference Item How to cite: For guidance on citations see FAQs © 2011 Springer-Verlag Berlin Heidelberg Version: Accepted Manuscript Link(s) to article on publisher's website: http://dx.doi.org/doi:10.1007/978-3-642-19125-1_3 Copyright and Moral Rights for the articles on this site are retained by the individual authors and/or other copyright owners. For more information on Open Research Online’s data policy on reuse of materials please consult the policies page. oro.open.ac.uk The Security Twin Peaks Thomas Heyman\textsuperscript{1}, Koen Yskout\textsuperscript{1}, Riccardo Scandariato\textsuperscript{1}, Holger Schmidt\textsuperscript{2}, and Yijun Yu\textsuperscript{3} \textsuperscript{1} IBBT-DistriNet, Katholieke Universiteit Leuven, Belgium \texttt{first.last@cs.kuleuven.be} \textsuperscript{2} Technische Universität Dortmund, Germany \texttt{holger.schmidt@cs.tu-dortmund.de} \textsuperscript{3} Open University, United Kingdom \texttt{y.yu@open.ac.uk} Abstract. The feedback from architectural decisions to the elaboration of requirements is an established concept in the software engineering community. However, pinpointing the nature of this feedback in a precise way is a largely open problem. Often, the feedback is generically characterized as additional qualities that might be affected by an architect’s choice. This paper provides a practical perspective on this problem by leveraging architectural security patterns. The contribution of this paper is the Security Twin Peaks model, which serves as an operational framework to co-develop security in the requirements and the architectural artifacts. Keywords: security, software architecture, requirements, patterns. 1 Introduction Often, the requirements specification is regarded as an independent activity with respect to the rest of the software engineering process. In fact, both literature and practice have pointed out that requirements cannot be specified in isolation and “thrown over the wall” to the designers and implementers of the system. In contrast, the requirements specification (describing the problem) and the architectural design (shaping a solution) are carried on concurrently and iteratively, while still maintaining the separation between the problem and solution space. This process of co-developing the requirements and the software architecture is referred to as the Twin Peaks model [22]. As depicted in Figure 1, the specification process (i.e., refinement) in the Twin Peaks model continuously jumps back and forth between the requirements and architectural peaks, in order to embrace the decisions made in the other peak. Some work already exists that focuses on the forward transition from security requirements to software architectures [18,11,31,26]. This work leverages standardized solutions, such as security patterns. These solutions are related to the security requirements via traceability links, facilitating both the selection of the right architectural solutions and documentation of the rationale for the architectural choice [29]. This, in turn, facilitates impact analysis in face of change. Concerning the backward transition, even if the importance of the feedback from the architecture to the requirements is an established concept in the software engineering community, the literature fails in pinpointing the nature of this feedback in a precise and operational way. This is also true for software qualities such as security. 
Often, the feedback is generically characterized as additional qualities, such as performance, that might be affected by a security architectural choice [28]. This paper presents an elaboration of the original Twin Peaks model in the context of security, called the Security Twin Peaks. By leveraging architectural security patterns, the model provides constructive insights in the process of specifying and designing a security-aware system, by pinpointing interaction points between the software architect’s and the requirements engineer’s perspective. In particular, we illustrate that an architectural security pattern actually consists of three elements that are key with regard to the Twin Peaks: (1) components supporting the security requirement by fulfilling a security functionality, (2) roles (connecting the generic solution to the specific architecture) and the expectations on such roles, and (3) residual goals. As our main contribution, we show how these elements are related to the requirements specification and can be leveraged to drive the refinement process, thereby substantiating the Security Twin Peaks model. KAOS is used to represent (security) requirements [27]. However, the presented model is not specific to the chosen requirements engineering methodology. The rest of this paper is organized as follows. The related work is presented in Section 2. Architectural security patterns are analyzed in Section 3, in order to identify the root causes of feedback from the architectural design to the requirements specification. The Security Twin Peaks model is introduced and discussed in Section 4. Finally, Section 5 presents the concluding remarks. 2 Related Work The problem peak in secure software engineering. In the realm of security software engineering, Haley et al. [10,9] present a framework for representing security requirements in an application context and for both formal and informal argumentation about whether a system satisfies them. The proposed argumentation process specifies several iterative steps for the problem part of the Twin Peaks. Mouratidis et al. [19,18,14] present a procedure to translate the Secure Tropos models [8] into UMLsec diagrams [16]. However, they do not provide explicit feedback from the chosen architectural solution back to the requirements phase. The solution peak in secure software engineering. Côté et al. [5] propose a software development method using problem frames for requirements analysis and using architectural patterns for the design. For the benefit of evolving systems, evolution operators are proposed to guide a pattern-based transformation procedure, including re-engineering tasks for adjusting a given software architecture to meet new system demands. Through application of these operators, relations between analysis and design documents are explored systematically for accomplishing desired software modifications, which allows for reusing development documents to a large extent, even when the application environment and the requirements change. In parallel, Hall et al. [12] propose the A-struct pattern in problem frames to explore the relationship between requirements and architectures in a problem-oriented software engineering methodology. Both work deals with feedback for general software engineering problems, but they had not focused on specific difficulties in secure software development. Security patterns. Many authors have advanced the field of security design patterns during the last years [30,17,3,24,25,6]. 
A comprehensive overview and a comparison of the different existing security design patterns can be found in [13], which establishes, among others, that the quality of the documentation of some existing security design patterns is questionable. A recent survey by Bandara et al. [1] compares various software engineering methods in application to address a concrete RBAC security pattern to reveal that there is still a need to systematically support the Security Twin Peaks by linking security requirements with security architectures. To shed more lights on the mechanisms for Secure Twin Peaks, another extensive survey on security requirements for evolving systems [21] categories the literature in terms of how an evolving system is related to its evolving requirements and changing contexts. Fernandez et al. [7] propose a methodology to systematically use security design patterns in UML activity diagrams to identify threats to the system and to nullify these threats during fine-grained design. Mouratidis et al. [20] present an approach to make use of security design patterns that connects these patterns to the results generated by the Secure Tropos methodology [8]. 3 Architectural Security Patterns Revisited Patterns are a well-known and recognized technique to build software architectures. This section revisits architectural security patterns and highlights the key elements that are used in Section 4 as stepping stones to link the solution domain (architectural peak) to the problem domain (requirements peak). To illustrate the concepts, the Authentication Enforcer [25] pattern is used as an example. This pattern describes how to solve the problem of authenticating users in a systematic way by creating an authentication layer, and provides a number of different implementation alternatives to realize this layer in the architecture. One of the alternatives is the provider-based authentication method depicted in Figure 2. The solution consists of an Authentication Enforcer component that mediates all access requests originated by Clients and delegates the implementation of an authentication method to a third-party Authentication Provider component. 3.1 Key Notions for Co-development Besides the many pieces of information that are traditionally documented in a pattern (e.g., the problem description and the known uses), we observe that an architectural security pattern can be seen, at its core, as a combination of three parts. They are: 1) Components and behavioral requirements. The participants of a pattern can be grouped into components that are newly introduced, and roles (further discussed in point 2) referring to components that are external to the pattern. A new component has a security-specific purpose, i.e., it adds new functionality to the system that is specific to a security requirement the system should uphold. This corresponds to the operationalization of a secondary functional requirement, as in Haley et al. [9]. Hence, new components introduce (finer-grained) behavioral requirements, which they fulfil and to which they can be linked, as clarified in Section 4. Example. In the example, the Authentication Enforcer pattern introduces an Authentication Enforcer component, which encapsulates the authentication logic. This component is only needed to address the security problem statement. 2) Roles and expectations. Patterns are generic solutions that need to be instantiated in the context of concrete (possibly partial) architectures. Roles are used for that purpose. 
A role is a reference that needs to be mapped to a component (or sub-system) that is already present in the existing architecture. Hence, the roles provide the connective between the new components and the existing components, and define how both should interact. Often, a pattern introduces expectations specific to its roles that need to be fulfilled by the concrete architecture. The pattern can impose constraints on both the way an external component is supposed to play a given role, as well as the way the external component interacts via the connectors with the rest of the pattern internals. Hence, roles introduce finer-grained requirements in the problem domain. Example. The Authentication Enforcer pattern introduces two roles: the Client, which is mapped to the actual component that invokes the Authentication Enforcer component, and the Authentication Provider, which needs to be mapped to a third-party system providing an authentication mechanism. One expectation that should be realized by the Authentication Provider role is that the result of the authentication process is passed back as a Principal object. The pattern specifies the responsibilities but does not dictate how the Authentication Provider should be implemented (e.g., it does not specify the authentication mechanism). The pattern also imposes certain expectations on the interaction between the Authentication Enforcer and the Client. It suggests to protect the confidentiality of credentials, especially during transit. For instance, in a web context, the pattern suggests to avoid clear-text communication. 3) Residual goals. These are security considerations to take into account when instantiating the pattern. For instance, the pattern might make (trust) assumptions on the environment in which the system is deployed, that fall outside the scope of the solution presented by the pattern. These residual goals are not under the responsibility of either the newly introduced components or the roles. Example. One residual goal of the Authentication Enforcer pattern is to localize all authentication logic in the Authentication Enforcer component. Realizing this goal is out of scope of the pattern itself—it is impossible for the Authentication Enforcer pattern to enforce, in some way, that no other component contains custom authentication code. A residual goal externalizes this concern, placing the responsibility back in the hands of the software architect. Another residual goal is that it should be impossible for an attacker to obtain the user's credentials. This manifests itself in residual requirements such as "make credentials hard to forge" (e.g., implement a strong password policy) and "ensure that credentials do not leak" (e.g., store salted hashes locally, do not store the passwords in plain text). As a final note, although this section focuses on architectural security patterns, the three parts presented above can be identified in any generic security architectural solution, irrespective from whether it is described as a pattern or not. 3.2 Revisiting the Pattern Documentation The above three parts have been implicitly mentioned (often in a scattered way) in the literature, e.g., in the documentation of existing architectural patterns [4]. These patterns are usually documented by means of the following information [3]. The problem and forces describe the context from which the pattern emerged. The solution describes how the pattern resolves the competing forces and solves the problem. This solution consists of two parts. 
The structure of the solution depicts the different participants that play a role in the pattern, and the relationships among them. The behavior of the solution describes the collaborations among the different participants, by which they realize their common goals and solve the problem. Apart from the solution, a pattern should document its consequences, that highlight both the strengths and weaknesses of the proposed solution. Finally, an example of the pattern in an easily understood software setting shows how this is applied in practise. The three elements from the previous section are often present in a general pattern description. The participants from the solution description correspond to both the newly introduced components and roles. The behavior of the solution introduces the behavioral requirements on the new components of the pattern, and possibly also expectations on the roles. The consequences of the pattern (and in particular the weaknesses) identify potential residual goals. This clearly shows that the three key notions for co-development are not so abstract and are, in most cases, already implicitly documented in existing patterns. This work contributes to the subject by bringing these three parts to the front and illustrating the primary role they play in the context of the Security Twin Peaks. 4 The Security Twin Peaks In the previous section, we discussed the fundamental parts comprising an architectural security pattern. In this section, these concepts are leveraged to outline a constructive process for co-developing secure software architectures and security requirements. Particular focus is placed on the feedback loop between architecture and requirements, and the more subtle intricacies that need to be taken into account. This process is not a new development process or paradigm by itself. Rather, it gives constructive guidance on what is mostly left implicit, i.e., how to interleave the requirements and architectural peaks when designing a secure software system using security patterns or other generic security solutions. Hopefully, the awareness of this process contributes to the effectiveness of requirements engineering and software architecture design. For the solution peak, we apply an attribute-driven architectural design approach, such as Bass et al. (ADD, [2]). In attribute-driven design, non-functional concerns such as performance, maintainability, security, and so on, are referred to as quality attributes, which are orthogonal to the functionality expected of the system and drive the design of the software architecture. Qualities are realized through fundamental design decisions, referred to as tactics (a.k.a. solution strategies). An architectural pattern (or style) is a domain-independent solution to a recurring problem, which packages and operationalizes a number of tactics. For the problem peak, we apply a goal-based requirements engineering approach, such as the one by van Lamsweerde (KAOS, [27]). In goal-based methods, goals (prescriptive statements of intent that the system should satisfy) are used for requirements elicitation, analysis and elaboration. Agents (active system components playing a specific role in goal satisfaction) achieve these goals through cooperation. We illustrate the process on a simple application. Consider an online shop which allows customers to buy products over the Internet. Customers have an account to which costs are billed. 
For billing purposes, an important goal of the system is that all purchases are securely traced back to the customers. ### 4.1 Overview The process is sketched in Figure 3. Table 1 complements the figure by explaining the meaning of the labels. The process progresses through 8 activities grouped in 2 phases. These phases are repeated over several iterations until the requirements specification and the architectural design are deemed as complete (i.e., detailed enough). In KAOS terminology, the goal decomposition stops once the leaf goals are realizable by software agents, i.e., a solution is selected and instantiated. In Phase I (activities 1-4), a tactic is selected to realize a goal. In Phase II (activities 5-9) a pattern is selected and instantiated. For each activity, a graphical representation is given of the current peak (with a filled triangle), and the transition between the peaks (with an arrow) where applicable. After a bird’s-eye view on the whole process, each activity is described in more detail. **Act. 1.** Select an initial security goal that will be refined in this iteration of the process. In the example, goal $g_1$ (‘all purchases are securely traced’) is selected. Obviously, $g_1$ would be part of a larger goal tree that is not shown here. **Act. 2.** Choose and assess a solution tactic for the goal. In the example, a prevention tactic is chosen: users shall be authenticated before purchase orders can be placed ($t_1$). The architect (together with other Table 1. The Security Twin Peaks: example <table> <thead> <tr> <th>Label</th> <th>Type</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>g₁</td> <td>Goal</td> <td>All purchases are securely traced</td> </tr> <tr> <td>t₁</td> <td>Tactic</td> <td>User shall be authenticated before purchasing</td> </tr> <tr> <td>g₃</td> <td>Goal</td> <td>An identified user shall be authenticated</td> </tr> <tr> <td>g₅</td> <td>Goal</td> <td>Mediate requests and delegate authentication</td> </tr> <tr> <td>e₆</td> <td>Expectation</td> <td>Return a Principal object</td> </tr> <tr> <td>e₇</td> <td>Expectation</td> <td>Avoid clear-text communication</td> </tr> <tr> <td>g₈</td> <td>Residual goal</td> <td>Ensure that credentials do not leak</td> </tr> <tr> <td>c₁</td> <td>Component</td> <td>Authentication Enforcer</td> </tr> <tr> <td>r₁</td> <td>Role</td> <td>Client</td> </tr> <tr> <td>r₂</td> <td>Role</td> <td>Authentication Provider</td> </tr> </tbody> </table> stakeholders), must decide whether, for example, the assurance gained from authentication outweighs the decrease in usability that goes together with enforcing authentication. Act. 3. ▲ △ ▲ Refine the goal based on the chosen tactic. In the example, this leads to the introduction of sub-goals g₂ (‘users shall be identified’), g₃ (‘an identified user shall be authenticated’) and g₄ (‘only authenticated users can purchase products’). While manual refinement by an expert is possible, a problem decomposition pattern could be associated to the tactic. The decomposition pattern can provide extra guarantees of soundness, e.g., by ensuring that the set of sub-goals is indeed complete. Act. 4. ▲ △ ▲ Check for conflicts between the newly introduced and the previous goals. Resolve conflicts where possible. If a conflict cannot be satisfactorily resolved, backtrack to Activity 2 to select a different solution. For instance, the identification goal g₂ may conflict with an anonymity goal elsewhere in the goal tree, aimed at protecting the customer’s privacy. 
The stakeholders can, for example, weaken the anonymity goal so that customers can still anonymously browse the shop, but anonymity is no longer required when an actual purchase is made. Act. 5. ▲ △ △ Select a sub-goal introduced in the previous step (e.g., g₃) that has to be resolved using an architectural security pattern. Act. 6. ▲ △ ▲ Choose an architectural security pattern whose problem statement matches the selected sub-goal. In the example, the Authentication Enforcer pattern is chosen to resolve goal g₃. For instance, this can be a solution pattern that is linked to the used problem decomposition pattern. If no suitable solution is found, backtrack to activity 2 to select a different tactic. Act. 7. ▲ △ ▲ Instantiate the architectural security pattern by performing the following activities: (a) Instantiate the newly introduced components from the pattern in the architecture. In the example, an Authentication Enforcer component (c₁) is added to the architecture. (b) Connect the new components to the rest of the architecture by binding the roles to concrete architectural elements. If no suitable architectural elements are present already in the architecture, create them. In the example, the Client role \( r_1 \) is mapped to the existing component that handles purchases, and the Authentication Provider role \( r_2 \) is mapped to a to-be-introduced component, responsible for checking the customer’s credentials. If the instantiation of the pattern becomes infeasible, backtrack to activity 6 to select a different pattern. **Act. 8.** \( \Delta \) Update the requirements model so that it corresponds to the new architecture, by performing the following activities. (a) Introduce new requirements that describe the functionality of the newly introduced components. For instance, \( g_5 \) (‘mediate requests and delegate authentication’). (b) Introduce the expectations that need to be achieved by the elements that play one of the pattern’s roles. For instance, \( e_6 \) (‘return a Principal object’) is an expectation for role \( r_2 \) and \( e_7 \) (‘avoid clear-text communication’) is an expectation on the connector between \( c_1 \) and \( r_2 \). Note that this list of expectations is not complete for the given example. (c) Add the residual goals described by the pattern to the goal tree. For instance, only \( g_8 \) (‘ensure that credentials do not leak’) is shown in the example. **Act. 9.** \( \Delta \) Check for conflicts between the newly introduced and the previous goals. Resolve conflicts where possible. If a conflict cannot be satisfactorily resolved, backtrack to activity 6 to select a different solution. **4.2 Discussion** The previous process description should be complemented with the following considerations. **Activity 1.** The security goal that is selected in this activity can originate from known requirements engineering techniques, which we do not consider further in this work. Also, the order in which goals are selected for refinement is not fixed, and should be decided by the requirements engineer and the other stakeholders. It should be noted, however, that additional security goals can be introduced by Activity 8 later in the process. These goals should also be considered for selection in the next iteration. **Activity 2.** In choosing the solution tactic, the focus is on determining a suitable tactic to guide the goal decomposition, possibly led by a catalog of tactics. 
For instance, security can be handled by preventing attacks (e.g., authenticate users), detecting attacks (e.g., intrusion detection) or recovering from an attack (e.g., using audit trails) [2]. While not having a direct manifestation in the primary architectural artifacts (at this stage), choosing a tactic does involve the architectural peak, as potential architectural constraints need to be taken into account. This is why the architect should assess whether the current architecture is able to support the considered tactic. Also, care must be taken to choose a tactic that does not conflict with the important qualities of the existing architecture. Note that it is still uncertain whether it is feasible to fulfill the goal using the chosen tactic. This only becomes apparent after a solution has been chosen and instantiated in Activity 7. To determine the suitability of a tactic, various factors need to be taken into account and a risk assessment should be performed to decide whether the potential losses outweigh the implementation costs. For instance, the tactic can be too costly or even impossible to implement, e.g., a tactic may require full mediation, which is not supported by the current architectural environment. Finally, notice that the selection of a tactic is an important architectural decision that belongs to the body of architectural knowledge. As shown in Figure 3, the tactic can be linked to the pattern that will be selected later on. In this respect, the tactic documents the rationale that will lead to the selection of a certain pattern and complements the rationale represented by linking the pattern and the goal it realizes. In the online shop example, the prevention tactic is chosen: a user will be asked to authenticate before the shopping process continues, ensuring that the identity of the user is known before the billing procedure starts. An alternative tactic (while not as straightforward in the e-commerce context) would be detection and recovery: send the invoice to the address the user entered, without performing rigorous authentication first. If the bill is paid, the item gets shipped. Otherwise, the order is canceled. **Activity 3.** Performing a goal decomposition based on a tactic represents the completion of a first round-trip between the problem peak and the solution peak. Note that the influence of architectural decisions on goal decomposition is mentioned in KAOS as well. In particular, in KAOS, a goal can be decomposed into several alternative branches and it is acknowledged that the selection of an alternative leads to a different architecture. In KAOS, the selection of an alternative is driven by soft goals (i.e., system qualities, development goals or architectural constraints) [26]. **Activity 4 (and 9).** In some cases, the new goals (resulting from a decomposition or introduced by a pattern instantiation) will fit naturally in the existing goal tree. However, conflicts may emerge and, hence, there is a need to explicitly incorporate conflict resolution in the secure development process. Requirements engineering methodologies such as KAOS already define techniques to resolve conflicts (e.g., avoidance, restoration, anticipation or weakening) [27]. If the conflict cannot be sufficiently resolved, however, backtracking and selecting a different tactic or pattern can be considered. Of course, it can also be decided that the currently selected pattern remains in place, and the other (conflicting) part of the system is revisited. 
**Activity 5.** The selection of a sub-goal initiates the second part of the process, where a concrete solution is chosen and instantiated. Like in Activity 1, the order of selection is left to the insight of the requirements engineer and the other stakeholders, which may mandate certain priorities. Activity 6. The selection of the architectural pattern defines a traceability link, connecting the selected sub-goal and the pattern, i.e., the goal provides the rationale for the pattern. As mentioned before, this explicit relationship enriches the architectural knowledge. Note also that, sometimes, a pattern may be able to solve a collection of goals simultaneously. This leads to more complex traceability links, but the outlined process can still be used. Conversely, an architectural pattern may not be a complete match for the selected sub-goal, and can only fulfill a part of it. In this case, the goal can be additionally refined, such that one of its children match the pattern, or, alternatively, the initial refinement can be adapted. Activity 7. By instantiating new components and connecting these to existing components, the software architecture gets refined. This refinement comes in two flavors. A pattern can extend an architecture with new components, while largely leaving the existing system untouched, as is the case with the Authentication Enforcer. As a second category of refinements, a pattern can substitute one or more components with a more refined subsystem. An example of such a pattern is the Secure Message Router [2], which can replace an existing message broker. Of course, when instantiating the pattern, established software engineering practices need to be applied. For instance, related functionality should be grouped together. This also implies that the solution should be merged with existing components where possible. For instance, corresponding roles from different patterns can be mapped to the same component. It can be expected that new components pose no difficulties to their introduction in the system, because they are independent from the context. Concerning the role bindings, however, more problems can arise. In some cases, it can be straightforward to select an existing component to fulfill a role, or to extend an existing component with new responsibilities. In general, however, the expectations imposed on a role might fundamentally conflict with other parts of the system. For example, consider a pattern dictating that communication between components should be encrypted. If the element to which the role gets bound to requires that all its connections are plain-text to support auditing, then this clearly triggers a non-trivial conflict that prevents the pattern from being instantiated correctly. In general, it can be observed that conflicting expectations are the root cause of conflict among patterns, which are informally documented in pattern catalogs. Similarly, pattern languages (such as [30]) contain a cohesive set of patterns resolving each others residual goals as much as possible, while not introducing conflicts. Activity 8. There are three distinct types of feedback, arising from the three parts of an architectural security pattern described in the Section 3. Each type of feedback introduces new elements in the requirements model. The first type of feedback is the addition of new requirements assigned to the newly introduced components from the security pattern. 
These requirements describe the functionality of the new components and are necessary to ensure that all behavior implemented in the system is traceable to some requirement. The second type of feedback is the set of expectations (i.e., constraints) imposed on the elements that play one of the roles from the pattern. It is then up to the architect to iterate over the architecture and assess whether the expectations (1) are already met by the component and the connectors involved in the corresponding role (e.g., it might be that the web server hosting the shop already supports SSL in order to securely transmit user credentials) or (2) require a refinement of these architectural elements so that they meet the expectations. The third type of feedback is the set of residual goals. These goals are not assigned to a concrete element, because it is unspecifed (from the pattern perspective) which element is responsible for them, or even how to achieve them. Therefore, they serve as candidate initial goals for refinement in a next iteration of the process. Obviously, it could be the case that the residual goals are already met by some sub-system of the architecture as is. In summary, the unbounded responsibility associated to residual goals distinguishes them from the previous type of feedback. The goals generated by the feedback need to be reconciled with the goal tree. All goals introduced in this activity are prescribed by the pattern, and are necessary to ensure its correct functioning. As the pattern itself was selected to achieve the sub-goal selected in Activity 5 (e.g., $g_3$ in the example), it is natural to expect that the feedback goals need to be inserted as children of that sub-goal. 5 Conclusion This paper has presented an elaboration of the Twin Peaks model, specific to co-developing secure software architectures and security requirements using patterns. The precise interaction points between architectural design and requirements engineering (in the context of software security) have been identified by decomposing the instantiation of an architectural security pattern into the intertwined process of (a) introducing new components and binding existing components to roles, and (b) introducing security behavioral requirements, expectations and residual goals. By pinpointing these interaction points, it is easier for the security-minded requirements engineer and software architect to predict where feedback might arise during the development process, and identify its root cause. Furthermore, this model can be leveraged to build more robust secure software engineering methods. We plan to evaluate the secure twin peaks by applying it to other security requirements engineering methods such as SEPP [23], which is based on Jackson’s problem frames [15]. Also, we would like to validate our approach in the context of industrial case studies. We believe that the explicit documentation of traceability links between architectural design and requirements analysis artifacts helps to (1) systematically evolve software systems, and (2) increase the applicability of security patterns in practice. Acknowledgements. This research is partially funded by the Interuniversity Attraction Poles Programme Belgian State, Belgian Science Policy, and by the Research Fund K.U. Leuven. References
{"Source-Url": "http://oro.open.ac.uk/28408/1/security_twin_peaks.pdf", "len_cl100k_base": 6788, "olmocr-version": "0.1.50", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 34923, "total-output-tokens": 9086, "length": "2e12", "weborganizer": {"__label__adult": 0.0004048347473144531, "__label__art_design": 0.0005221366882324219, "__label__crime_law": 0.000514984130859375, "__label__education_jobs": 0.0007166862487792969, "__label__entertainment": 5.6624412536621094e-05, "__label__fashion_beauty": 0.00016164779663085938, "__label__finance_business": 0.0002474784851074219, "__label__food_dining": 0.0003249645233154297, "__label__games": 0.0005822181701660156, "__label__hardware": 0.0005345344543457031, "__label__health": 0.0004062652587890625, "__label__history": 0.00020110607147216797, "__label__home_hobbies": 6.604194641113281e-05, "__label__industrial": 0.0003504753112792969, "__label__literature": 0.00028634071350097656, "__label__politics": 0.00027871131896972656, "__label__religion": 0.00040602684020996094, "__label__science_tech": 0.011962890625, "__label__social_life": 7.88569450378418e-05, "__label__software": 0.004150390625, "__label__software_dev": 0.97705078125, "__label__sports_fitness": 0.00029921531677246094, "__label__transportation": 0.00041031837463378906, "__label__travel": 0.00017821788787841797}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 40690, 0.02486]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 40690, 0.67252]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 40690, 0.90269]], "google_gemma-3-12b-it_contains_pii": [[0, 739, false], [739, 3380, null], [3380, 5576, null], [5576, 8784, null], [8784, 11056, null], [11056, 14368, null], [14368, 17624, null], [17624, 19312, null], [19312, 22380, null], [22380, 25283, null], [25283, 28558, null], [28558, 31894, null], [31894, 35044, null], [35044, 38204, null], [38204, 40690, null]], "google_gemma-3-12b-it_is_public_document": [[0, 739, true], [739, 3380, null], [3380, 5576, null], [5576, 8784, null], [8784, 11056, null], [11056, 14368, null], [14368, 17624, null], [17624, 19312, null], [19312, 22380, null], [22380, 25283, null], [25283, 28558, null], [28558, 31894, null], [31894, 35044, null], [35044, 38204, null], [38204, 40690, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 40690, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 40690, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 40690, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 40690, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 40690, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 40690, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 40690, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 40690, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 40690, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 40690, null]], "pdf_page_numbers": [[0, 739, 1], [739, 3380, 2], [3380, 5576, 3], [5576, 8784, 4], [8784, 11056, 5], [11056, 14368, 6], [14368, 17624, 7], [17624, 19312, 8], [19312, 22380, 9], [22380, 25283, 10], [25283, 28558, 11], [28558, 31894, 12], [31894, 35044, 
13], [35044, 38204, 14], [38204, 40690, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 40690, 0.08392]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
efb820ebcbbb19d48a132ed25d451f2a6b0c12f6
Oz—A Programming Language for Multi-Agent Systems* Martin Henz, Gert Smolka, Jörg Würtz German Research Center for Artificial Intelligence (DFKI) Stuhlsatzenhausweg 3 D-66000 Saarbrücken Germany E-mail: {henz, smolka, wuertz}@dfki.uni-heimdalen.de Abstract Oz is an experimental higher-order concurrent constraint programming system under development at DFKI. It combines ideas from logic and concurrent programming in a simple yet expressive language. From logic programming Oz inherits logic variables and logic data structures, which provide for a programming style where partial information about the values of variables is imposed concurrently and incrementally. A novel feature of Oz is that it accommodates higher-order programming without sacrificing that denotation and equality of variables are captured by first-order logic. Another new feature of Oz is constraint communication, a new form of asynchronous communication exploiting logic variables. Constraint communication avoids the problems of stream communication, the conventional communication mechanism employed in concurrent logic programming. Constraint communication can be seen as providing a minimal form of state fully compatible with logic data structures. Based on constraint communication and higher-order programming, Oz readily supports a variety of object-oriented programming styles including multiple inheritance. 1 Introduction Oz is an attempt to create a high-level concurrent programming language bringing together the merits of logic and object-oriented programming in a unified language. Our natural starting point was concurrent constraint programming [Saraswat and Rinard, 1990], which brings together ideas from constraint and concurrent logic programming. Constraint logic programming [Jaffar and Lassez, 1987, Colmerauer and Benhamou, 1993], on the one hand, originated with Prolog II [Colmerauer et al., 1983] and was prompted by the need to integrate numbers and data structures in an operationally efficient, yet logically sound manner. Concurrent logic programming [Shapiro, 1989], on the other hand, originated with the Relational Language [Clark and Glynn, 1981] and was prompted by the Japanese Fifth Generation Project, where logic programming was conceived as the basic system programming language and thus had to account for concurrency, synchronization and indeterminism. For this purpose, the conventional SLD-resolution scheme had to be replaced with a new computation model based on the notion of committed choice. At first, the new model developed as an ad hoc construction, but finally Maher [Maher, 1987] realized that commitment of agents can be captured logically as constraint entailment. A major landmark in the new field of concurrent constraint programming is AIL [Janson and Narlikar, 1991], the first implemented concurrent constraint language accommodating search and deep guards. Saraswat’s concurrent constraint model [Saraswat and Rinard, 1990] can accommodate object-oriented programming along the lines of Shapiro’s stream-based model for Concurrent Prolog [Shapiro and Takenchi, 1983]. However, this model is intolerably low-level due to the clumsiness of stream communication and the lack of higher-order programming facilities. This becomes fully apparent when the model is extended to provide for inheritance [Goldberg et al., 1992]. Thus the two essential innovations Oz has to provide to be well-suited for object-oriented programming are better communication and a facility for higher-order programming. 
Both innovations require stepping outside of established semantical foundations. The semantics of Oz is thus specified by a new mathematical model, called the Oz Calculus, whose technical set-up was inspired by the π-calculus [Milner, 1991], a recent foundationally motivated model of concurrency. The way Oz provides for higher-order programming is unique in that denotation and equality of variables are captured by first-order logic only. In fact, denotation of variables and the facility for higher-order programming are completely orthogonal concepts in Oz. This is in contrast to existing approaches to higher-order logic programming [Naderahi and Miller, 1988, Chen et al., 1993]. Constraint communication is asynchronous and indeterministic. A communication event replaces two complementary communication tokens with an equation linking the partners of the communication. Constraint communication introduces a minimal form of state that --- *This work has been supported by the Bundesminister für Forschung und Technologie, contract ITW-9105. is fully compatible with logic data structures. Efficient implementation of fair constraint communication is straightforward. The paper is organized as follows. The next section outlines a simplified version of the Oz Calculus. Section 3 shows how Oz accommodates records as a logic data structure. The remaining sections present one possible style of concurrent object-oriented programming featuring multiple inheritance. 2 The Oz Calculus The operational semantics of Oz is defined by a mathematical model called the Oz Calculus [Smolla, 1993]. In this section we outline a simplified version sufficing for the purposes of this paper. The basic notion of Oz is that of a computation space. A computation space consists of a number of agents connected to a blackboard (see Fig. 1). Each agent reads the blackboard and reduces once the blackboard contains the information it is waiting for. The information on the blackboard increases monotonically. When an agent reduces, it may put new information on the blackboard and create new agents. Agents themselves may have one or several local computation spaces. Hence the entire computation system is a tree-like structure of computation spaces (see Fig. 1). The agents of a computation space are agents at the micro-level. They are used to program agents at the macro-level. One interesting form of macro-agents are the objects we will introduce in a later section of this paper. Formally, a computation state is an expression according to Fig. 2. (if ξ is a syntactic category, ξ denotes a possibly empty sequence ξ1 ... ξn) Constraints, abstractions and communication tokens reside on the blackboard. Applications and conditionals are agents. Composition and quantification are the glue assembling agents and blackboard items into a computation space. Quantification introduces local variables. Abstractions may be seen as procedure definitions and applications as procedure calls. The clauses of a conditional are unordered. Their guards, i.e. σ in if σ then τ, constitute local computation spaces. Note that any expression can be taken as a guard; one speaks of a flat guard if the guard is a constraint. There are two variable binders: quantification Σxσ binds x with scope σ and abstraction x?y/σ binds the variables in y with scope σ. Free variables of an expression are defined accordingly. 
\[ x, y, z : \text{variables} \\ \sigma, \tau, \mu ::= \\ \phi, \psi : \text{constraint} \\ x, y/\sigma : \text{abstraction} \\ x!y, x?y : \text{put, get token} \\ x?y : \text{application} \\ \text{if } \omega_1 \ldots \omega_n \text{ else } \sigma : \text{conditional} \\ \sigma \land \tau : \text{composition} \\ \exists x \sigma : \text{quantification} \\ \omega ::= \text{if } \sigma \text{ then } \tau : \text{clause} \\ \phi, \psi ::= \bot | \top | s \triangleq t | r(\delta) | \phi \land \psi \] Figure 2: Expressions of the Oz Calculus Computation is defined as reduction (i.e., rewriting) of expressions. A reduction step is performed by applying a reduction rule to a subexpression satisfying the application conditions of the rule. There is no backtracking. Control is provided by the provision that reduction rules must not be applied to mute subexpressions, i.e., subexpressions that occur within bodies of clauses, else parts of conditionals, or bodies of abstractions. It is up to the implementation which non-mute subexpression is rewritten with which applicable rule. Reduction "σ → τ" is defined modulo structural congruence "σ ≡ τ" of expressions, that is, satisfies the inference rule \[ \sigma \equiv \sigma' \quad \sigma \rightarrow \sigma' \quad \tau' \equiv \tau' \\ \sigma \rightarrow \tau \] Structural congruence is an abstract equality for computation states turning them from purely syntactic objects into semantical objects. Structural congruence provides for associativity and commutativity of composition, re-naming of bound variables, quantifier mobility \[ \exists x \sigma \land \tau \equiv \exists x (\sigma \land \tau) \quad \text{if } x \text{ does not occur free in } \tau, \] constraint simplification, and information propagation from global blackboards to local blackboards. 2.1 Constraints Constraints (ϕ, ψ in Fig. 2) are formulas of first-order predicate logic providing for data structures. Logical conjunction of constraints coincides with composition of expressions. Constraints express partial information about the values of variables. The semantics of constraints is defined logically by a first-order theory Δ and imposed with the congruence law \[ \phi \equiv \psi \quad \text{if } \Delta \models \phi \leftrightarrow \psi \] This law closes the blackboard under entailed constraints (since \(\Delta \models \phi \rightarrow \psi\) iff \(\Delta \models \phi \rightarrow \phi \land \psi\)). The congruence The Annihilation Law \[ \exists \pi (\phi \land y : \pi) \equiv \top \] if \( \Delta \models \exists \pi \phi \) and \( y \in L(\pi, \phi) \), where \[ L(\pi, \phi) := \{ y \in \pi \mid \forall z : \phi \models y = z \rightarrow z \in \pi \} \] provides for the deletion of quantified constraints and abstractions not affecting visible variables. 2.2 Application An application agent \( x \pi \eta \) waits until an abstraction for its link \( x \) appears on the blackboard and then reduces as follows: \[ x \pi \eta \land x \exists y / \sigma \rightarrow (x \exists y / \sigma \land x \exists y / \sigma) \land x \exists y / \sigma \] if \( \pi \) and \( \eta \) are disjoint and of equal length. Note that the blackboard \( y : \exists y / \sigma \land x \exists y \) contains an abstraction for \( x \) due to the congruence laws stated above. Since the link \( x \) of an abstraction \( x \exists y / \sigma \) is a variable like any other, abstractions can easily express higher-order procedures. 
Note that an abstraction \( x \exists y / \sigma \) does not impose any constraints (e.g., equalities) on its link \( x \). 2.3 Constraint Communication The semantics of the two communication tokens is defined by the Communication Rule: \[ x ! y \land z ? y \rightarrow x \exists z. \] Application of this rule amounts to an indeterministic transition of the blackboard replacing two complementary communication tokens with an equality constraint. The Communication Rule is the only rule deleting items from the blackboard. Since agents read only constraints and abstractions, the information visible to agents nevertheless increases monotonically. 2.4 Conditional It remains to explain the semantics of a conditional agent \[ \text{if } \exists \pi_1 (\sigma_1 \text{ then } \tau_1) \cdots \exists \pi_n (\sigma_n \text{ then } \tau_n) \text{ else } \mu. \] The guards \( \sigma_i \) of the clauses are local computation spaces reducing concurrently. For the local computations to be meaningful it is essential that information from global blackboards is visible on local blackboards. This is achieved with the Propagation Law (recall that the clauses are unordered): \[ \pi \land \text{if } \exists \pi (\sigma \text{ then } \tau) \text{ else } \mu \equiv \pi \land \text{if } \exists \pi (\sigma \land \tau) \text{ else } \mu \] if \( \pi \) is a constraint or abstraction and no variable in \( \pi \) appears free in \( \pi \). Read from left to right, the law provides for copying information from global blackboards to local blackboards. Read from right to left, the law provides for deletion of local information that is present globally. An example verified by employing the Propagation Law in both directions (as well as constraint simplification) is \[ x \exists 1 \land \text{if } (x \exists 1 \text{ then } \sigma) \text{ (x \exists 2 \text{ then } \tau) else } \mu \equiv x \exists 1 \land \text{if } (\top \text{ then } \sigma) \text{ (\bot \text{ then } \tau) else } \mu. \] The example shows that the constraint theory entails that 1 and 2 are different. Operationally, the constraint simplification and propagation laws can be realized with a so-called relative simplification procedure. Relative simplification for the constraint system underlying Oz is investigated in [Smolka and Treinen, 1992]. There are two distinguished forms a guard of a clause may eventually reduce to, called satisfied and failed. If a guard of a clause is satisfied, the conditional can reduce by committing to this clause: \[ \text{if } \exists \pi (\sigma \text{ then } \tau) \text{ else } \mu \rightarrow \exists \pi (\sigma \land \tau) \text{ if } \exists \pi \sigma \equiv \top. \] Reduction puts the guard on the global blackboard and replenishes the body of the clause. A guard is failed if the constraints on its blackboard are unsatisfiable. If the guard of a clause is failed, the clause is simply discarded: \[ \text{if } \exists \pi (\bot \land \sigma \text{ then } \tau) \text{ else } \mu \rightarrow \text{if } \exists \pi \sigma \text{ else } \mu. \] Thus a conditional may end up with no clauses at all, in which case it reduces to its else part: \[ \text{if } \exists \pi \sigma \text{ else } \mu \rightarrow \mu. 
The reduction

\[
x \doteq 1 \land \text{if } (x \doteq 1 \text{ then } \sigma)\ (x \doteq 2 \text{ then } \tau) \text{ else } \mu
\;\rightarrow\;
x \doteq 1 \land \sigma
\]

is an example for the application of the first rule, and

\[
x \doteq 3 \land \text{if } (x \doteq 1 \text{ then } \sigma)\ (x \doteq 2 \text{ then } \tau) \text{ else } \mu
\;\rightarrow^{*}\;
x \doteq 3 \land \mu
\]

is an example employing the other two reduction rules.

2.5 Logical Semantics

The subcalculus obtained by disallowing communication tokens and conditionals with more than one clause enjoys a logical semantics by translating expressions into formulas of first-order predicate logic as follows (composition is interpreted as conjunction, and quantification is interpreted as existential quantification):

\[
\begin{array}{rcl}
x{:}\bar{y}/\sigma & \leadsto & \forall\bar{y}\,(\mathrm{apply}(x\,\bar{y}) \leftrightarrow \sigma) \\
x\bar{y} & \leadsto & \mathrm{apply}(x\,\bar{y}) \\
\text{if } \exists\bar{x}(\sigma \text{ then } \tau) \text{ else } \mu & \leadsto & \exists\bar{x}\,(\sigma \land \tau) \lor (\neg\exists\bar{x}\,\sigma \land \mu).
\end{array}
\]

Under this translation, reduction is an equivalence transformation, that is, if \( \sigma \rightarrow \tau \) or \( \sigma \equiv \tau \), then \( \Delta \models \sigma \leftrightarrow \tau \). Moreover, negation can be expressed since \( \neg\sigma \) is equivalent to if \( \sigma \) then \( \bot \) else \( \top \).

2.6 Unique Names

A problem closely related to equality and of great importance for concurrent programming is the dynamic creation of new and unique names. Roughly, one would like a construct \( \mathrm{gensym}(x) \) such that

\[
\mathrm{gensym}(x) \land \mathrm{gensym}(y)
\]

is congruent to a constraint entailing \( \neg(x \doteq y) \). For this purpose we assume that there are infinitely many distinguished constant symbols called names such that the constraint theory \( \Delta \) satisfies:

1. \( \Delta \models \neg(a \doteq b) \) for every two distinct names \( a, b \);
2. \( \Delta \models S \) if and only if \( \Delta \models S[a/b] \), for every logical sentence \( S \) and every two names \( a, b \) (\( S[a/b] \) is obtained from \( S \) by replacing every occurrence of \( b \) with \( a \)).

Now \( \mathrm{gensym}(x) \) is modeled as a generalized quantification \( \exists a\,(x \doteq a) \), where the quantified name \( a \) is subject to α-renaming. With that and the quantifier mobility stated above we in fact obtain a constraint in which \( x \) and \( y \) are different:

\[
\exists a\,(x \doteq a) \land \exists a\,(y \doteq a)
\;\equiv\;
\exists a\,(x \doteq a) \land \exists b\,(y \doteq b)
\;\equiv\;
\exists a\,\exists b\,(x \doteq a \land y \doteq b).
\]

3 Records

The constraint system underlying Oz provides a domain that is closed under record construction [Smolka and Treinen, 1992]. We now outline its constraint theory as far as is needed for the rest of this paper. We will be very liberal when it comes to syntax. The reader may consult [Smolka and Treinen, 1992] for details. Records are obtained with respect to an alphabet of constant symbols, called atoms, and denoted by \( a, b, f, g \). Records are constructed and decomposed by constraints of the form

\[
x \doteq f(a_1{:}x_1 \ \ldots\ a_n{:}x_n)
\]

where \( f \) is the label, \( a_1, \ldots, a_n \) are the field names, and \( x_1, \ldots, x_n \) are the corresponding values of record \( x \). The order of the fields \( a_i{:}x_i \) is not significant.
The semantics of the above constraint is fixed by two axiom schemes

\[
f(\bar{a}{:}\bar{x}) \doteq f(\bar{a}{:}\bar{y}) \;\leftrightarrow\; \bar{x} \doteq \bar{y}
\]
\[
f(\bar{a}{:}\bar{x}) \doteq g(\bar{b}{:}\bar{y}) \;\leftrightarrow\; \bot \qquad \text{if } f \neq g \text{ or } [\bar{a}] \neq [\bar{b}]
\]

where \( [\bar{x}] \) is the set of elements of the sequence \( \bar{x} \). Field selection \( x.y \) is a partial function on records defined by the axiom schemes

\[
f(\bar{a}{:}\bar{x}\ \ b{:}y).b \doteq y
\]
\[
f(\bar{a}{:}\bar{x}).b \doteq y \;\leftrightarrow\; \bot \qquad \text{if } b \notin [\bar{a}].
\]

The function \( \mathrm{label}(x) \) is defined on records by the scheme

\[
\mathrm{label}(f(\cdots)) \doteq f.
\]

Finally, record adjunction \( \mathrm{adjoinAt}(x, b, z) \) is defined by the schemes:

\[
\mathrm{adjoinAt}(f(\bar{a}{:}\bar{x}\ \ b{:}y),\ b,\ z) \doteq f(\bar{a}{:}\bar{x}\ \ b{:}z)
\]
\[
\mathrm{adjoinAt}(f(\bar{a}{:}\bar{x}),\ b,\ z) \doteq f(\bar{a}{:}\bar{x}\ \ b{:}z) \qquad \text{if } b \notin [\bar{a}].
\]

We write \( f(x_1 \ldots x_n) \) as a shorthand for \( f(1{:}x_1 \ \ldots\ n{:}x_n) \). Thus we obtain Prolog terms as a special case of records.

4 Synchronous Communication

Constraint communication is asynchronous. The following program shows how synchronous communication can be expressed using constraint communication. Computation only proceeds after communication has taken place (signaled by an acknowledgement).

   proc {Producer}
      exists Ack in
         item('yellow brick' Ack)!Channel
         if Ack = 1 then {Producer} fi
      end
   end

   proc {Consumer}
      exists X Ack in
         item(X Ack)?Channel
         if Ack = 1 then {AddToRoad X} {Consumer} fi
      end
   end

We have now switched to the concrete syntax of Oz: proc {x ȳ} σ end stands for \( x{:}\bar{y}/\sigma \land \exists a\,(x \doteq a) \), {x ȳ} for \( x\bar{y} \), and juxtaposition for composition. Moreover, nesting is allowed and is eliminated by conjunction and quantification, e.g. item(X Ack)!Channel expands to exists Y in Y=item(X Ack) Y!Channel end. Finally, the default for a missing else part of a conditional is else true.

5 Objects

An object has a static aspect, its method table, and a dynamic aspect, its state. Methods are functions

\[
\mathrm{method} : \mathrm{state} \times \mathrm{message} \to \mathrm{state}.
\]

A method table is a mapping from method names to methods, represented as a record whose field names act as method names. A message is a record whose label is the name of the method and whose fields are arguments. It turns out that we can represent an object O by the procedure that sends the message. This representation gives a unique identity to the object since proc {x ȳ} σ end stands for \( x{:}\bar{y}/\sigma \land \exists a\,(x \doteq a) \).

   proc {O Message}
      if MethodName Method in
         MethodName = {Label Message}
         Method = MethodTable.MethodName
      then exists State in
         State?C
         if {Label State} = state
         then {Method State Message}!C fi
      end fi
   end

Observe that nested application makes programs more concise: the agent {Method State Message}!C expands to

   exists NState in {Method State Message NState} NState!C end

When a message is received by the object O, the method associated with the method name is retrieved using the method table of the object (i.e., late binding). Then the state of the object is replaced by the state obtained by applying the method. The following procedure provides a generic scheme for creating objects from a method table and an initial message:

   proc {Create IMessage MethodTable O}
      exists IMethod C in
         IMethod = MethodTable.{Label IMessage}
         {IMethod state(self:O) IMessage}!C
         proc {O Message} ... end
      end
   end

Observe that the notion of "self" is provided in a natural way by starting with the initial state state(self:O). Object initialization is provided by applying an initial message to that state. The resulting state is written on the blackboard. Now, the object is ready to receive messages. We abbreviate message sending of the form {O M} by O ^ M. Note that quantification of the communication link C hides the state and provides for data encapsulation.

6 Methods

Assume that we want to model a counter as an object. First, we fix the methods to be stored in the method table. To initialize the counter we use the method

   proc {Init InS X OutS}
      if Y in X = init(Y)
      then OutS = {AdjoinAt InS val Y} fi
   end

Observe that Init will add the attribute val if it is not present in the state InS (see the semantics of adjoinAt in Section 3). To ease the treatment of the state and to get a more elegant notation we abbreviate this abstraction by

   meth Init init(Y) val <- Y end

Incrementing and retrieving is achieved by

   proc {Inc InS X OutS}
      if X = inc
      then OutS = {AdjoinAt InS val InS.val + 1} fi
   end

which is abbreviated to

   meth Inc inc val <- @val + 1 end
   meth Get get(Y) Y = @val end

A counter is created by

   MT = mt(init:Init inc:Inc get:Get)
   {Create init(0) MT Counter}

7 Inheritance

In our framework, inheritance amounts to using the method tables of other objects to build the method table of a new object. We modify the procedure Create to provide for inheritance.

   proc {Create Ancestors IMessage NewMethods O}
      exists MethodName IMethod C AllMethods Send in
         AllMethods = {AdjoinAll Ancestors NewMethods}
         O = object(methods:AllMethods send:Send)
         proc {Send Message} ... end
      end
   end

The procedure AdjoinAll (not shown) adjoins the method tables of Ancestors and NewMethods from left to right: for any method name, the rightmost method definition is taken (cf. adjoinAt in Section 3).
To make the methods of objects accessible, an object is now represented as a record containing the methods and the send procedure. Therefore, message sending changes slightly: Counter ^ inc now stands for {Counter.send inc}. A counter that is displayed in a window (the object VisibleObject is defined in Section 9) and that can additionally decrement its value can be created by

   meth Dec dec val <- @val - 1 end
   {Create Counter|VisibleObject|nil init(0) mt(dec:Dec) DecCounter}

for which we introduce the following syntactic sugar.

   create DecCounter
      from Counter VisibleObject
      with init(0)
      meth dec val <- @val - 1 end
   end

8 Method Application

Some languages providing for inheritance support the concept of super to address methods overwritten due to the inheritance priority. Oz provides a more general scheme in that an object can apply to its state methods of any other object (regardless of inheritance). Assume an already defined object Rectangle. A square can inherit from a rectangle but needs for initialization only its length but not its width.

   create Square
      from Rectangle
      with init(10)
      meth init(X) (Rectangle.methods).init init(X X) end
      ...
   end

where the method expands to

   proc {Init InS X OutS}
      if Y in X = init(Y)
      then OutS = {Rectangle.methods.init InS init(Y Y)} fi
   end

Note that applying a method to the local state, as in (Rectangle.methods).init above, differs from sending a message such as @self ^ m in that the former transforms the local state immediately, whereas other messages can be taken before the latter is eventually executed.

9 Meta Object Protocol

Now, we modify the object system such that the essentials of object creation and message sending can be inherited, providing the object system with a meta object protocol like in [Kiczales et al., 1991] for CLOS. The new definition of Create uses the meta-method create to describe the object's behavior.

   proc {Create Ancestors Message NewMethods O}
      exists AllMethods in
         AllMethods = {AdjoinAll Ancestors NewMethods}
         {AllMethods.create create(AllMethods Message O) _}
      end
   end

The underscore "_" denotes an anonymous variable occurring only once. Like an organism, an object can inherit the way it and its heirs are created, and the basic structure of how it communicates with its environment. We can further modularize the object protocol such that, e.g., each method call is performed by a call to the meta-method methodCall. Assume that the meta-methods create and methodCall are defined in the object MetaObject. In this case, a VisibleObject that sends a message containing its current state to a Display whenever it executes a method can be created as follows:

   create VisibleObject
      from MetaObject
      meth methodCall(InS Meth Mess OutS)
         {Meth InS Mess OutS}
         Display ^ show(OutS)
      end
   end

Acknowledgements

We thank all members of the Programming Systems Lab at DFKI for countless fruitful discussions on all kinds of subjects and objects; particularly many suggestions came from Michael Mehl and Ralf Scheidhauer.

References

W. Chen, M. Kifer, and D. S. Warren. HiLog: A foundation for higher-order logic programming. In Proceedings of the ACM Conference on Functional Programming.

[Colmerauer and Benhamou, 1993] A. Colmerauer and F. Benhamou, editors. Constraint Logic Programming: Selected Research. MIT Press, 1993.

[Colmerauer et al., 1983] A. Colmerauer, H. Kanoui, and M. Van Caneghem. Prolog, theoretical principles and current trends. Technology and Science of Informatics, 1983.

Shapiro. Logic programs with inheritance. FGCS.

[Jaffar and Lassez, 1987] J. Jaffar and J.-L. Lassez. Constraint logic programming. In Proceedings of the Annual ACM Symposium on Principles of Programming Languages, 1987.

[Janson and Haridi, 1991] S. Janson and S. Haridi. Programming paradigms of the Andorra kernel language. In Logic Programming, Proceedings of the 1991 International Symposium, 1991.

[Kahn, 1989] K. M. Kahn. Objects: A fresh look. In Proceedings of the European Conference on Object-Oriented Programming, 1989.

[Kiczales et al., 1991] G. Kiczales, J. des Rivières, and D. G. Bobrow. The Art of the Metaobject Protocol. MIT Press, 1991.

[Maher, 1987] M. J. Maher. Logic semantics for a class of committed-choice programs. In Logic Programming, Proceedings of the Fourth International Conference, 1987.

[Milner, 1991] R. Milner. The polyadic π-calculus: A tutorial. ECS-LFCS Report Series 91-180, University of Edinburgh, 1991.

[Nadathur and Miller, 1988] G. Nadathur and D. Miller. An overview of λProlog. In Logic Programming: Proceedings of the Fifth International Conference and Symposium, 1988.

[Saraswat and Rinard, 1990] V. A. Saraswat and M. Rinard. Concurrent constraint programming. In Proceedings of the 17th Annual ACM Symposium on Principles of Programming Languages, 1990.

[Shapiro and Takeuchi, 1983] E. Shapiro and A. Takeuchi. Object oriented programming in Concurrent Prolog. New Generation Computing, 1983.

[Shapiro, 1988] E. Shapiro. The family of concurrent logic programming languages. ACM Computing Surveys.

[Smolka and Treinen, 1992] G. Smolka and R. Treinen. Records for logic programming. In Proceedings of the Joint International Conference and Symposium on Logic Programming, 1992.

[Smolka, 1993] G. Smolka. A calculus for higher-order concurrent constraint programming. Research report, DFKI,
{"Source-Url": "http://www.comp.nus.edu.sg/~henz/publications/pdf/IJCAI93.pdf", "len_cl100k_base": 7619, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 32718, "total-output-tokens": 8955, "length": "2e12", "weborganizer": {"__label__adult": 0.0003135204315185547, "__label__art_design": 0.00029969215393066406, "__label__crime_law": 0.00032520294189453125, "__label__education_jobs": 0.0005660057067871094, "__label__entertainment": 6.777048110961914e-05, "__label__fashion_beauty": 0.0001264810562133789, "__label__finance_business": 0.0002200603485107422, "__label__food_dining": 0.0003459453582763672, "__label__games": 0.00044846534729003906, "__label__hardware": 0.0006957054138183594, "__label__health": 0.0004215240478515625, "__label__history": 0.00020241737365722656, "__label__home_hobbies": 9.590387344360352e-05, "__label__industrial": 0.00047659873962402344, "__label__literature": 0.00026607513427734375, "__label__politics": 0.00026345252990722656, "__label__religion": 0.0004451274871826172, "__label__science_tech": 0.0247802734375, "__label__social_life": 8.916854858398438e-05, "__label__software": 0.005474090576171875, "__label__software_dev": 0.962890625, "__label__sports_fitness": 0.0002732276916503906, "__label__transportation": 0.0005321502685546875, "__label__travel": 0.0001741647720336914}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30188, 0.02392]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30188, 0.38089]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30188, 0.80069]], "google_gemma-3-12b-it_contains_pii": [[0, 4598, false], [4598, 9424, null], [9424, 14791, null], [14791, 20531, null], [20531, 25436, null], [25436, 30188, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4598, true], [4598, 9424, null], [9424, 14791, null], [14791, 20531, null], [20531, 25436, null], [25436, 30188, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30188, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30188, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30188, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30188, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30188, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30188, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30188, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30188, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30188, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30188, null]], "pdf_page_numbers": [[0, 4598, 1], [4598, 9424, 2], [9424, 14791, 3], [14791, 20531, 4], [20531, 25436, 5], [25436, 30188, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30188, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
f4f24c03ef9584b61e3b1d6e099e2490ef1aa023
THE NUTRITION ADVISOR EXPERT SYSTEM Scott M. Huse and Scott S. Shyne Rome Laboratory, Computer Systems Rome Laboratory, Communication Technology Abstract. The Nutrition Advisor Expert System (NAES) is an expert system written in the C Language Integrated Production System (CLIPS). NAES provides expert knowledge and guidance into the complex world of nutrition management by capturing the knowledge of an expert and placing it at the user’s fingertips. Specifically, NAES enables the user to: (1) obtain precise nutrition information for food items, (2) perform nutritional analysis of meal(s), flagging deficiencies based upon the United States Recommended Daily Allowances, (3) predict possible ailments based upon observed nutritional deficiency trends, (4) obtain a top-ten listing of food items for a given nutrient, and (5) conveniently upgrade the database. An explanation facility for the ailment prediction feature is also provided to document the reasoning process. INTRODUCTION The Nutrition Advisor Expert System (NAES) is an expert system written in the C Language Integrated Production System (CLIPS). The purpose of NAES is to emulate human expertise in the complex problem domain of nutrition analysis. NAES is a user-friendly, practical expert system with real application potential, e.g., nursing homes, hospitals, doctor’s offices, or home use. NAES allows even a novice user to: (a) quickly and easily obtain nutrition information on food item(s); (b) perform nutritional analysis on meal(s); (c) predict potential ailments based upon nutritional deficiency trends; (d) generate a top-ten listing of food items for a given nutrient; (e) conveniently upgrade the database of nutrition knowledge. NAES consists of a working memory of facts and rules in a knowledge base. The user can easily enter new food item facts into the database or delete existing facts. Consequently, the system can improve its performance with time. The system also features an explanation facility for the ailment prediction module. This important facility provides documentation detailing the reasoning process. In the development of this system, several artificial intelligence/expert system (AI/ES) techniques were utilized to increase system efficiency and reliability, e.g., fuzzy sets, certainty factors, uncertain evidence, modular design, and efficiency techniques in coding design. The methods by which some of these techniques were implemented in NAES is discussed in detail within the ‘System Features’ section of this paper. An expert system should be characterized by good performance, adequate response time, good reliability, understandability, and flexibility (Giarratano and Riley 1989). NAES possesses each of these important qualities. It is capable of performing quick expert nutritional analysis with a high degree of reliability and understandability. System flexibility is provided by a convenient and efficient mechanism for modifying the database of nutrition knowledge. INSTALLATION GUIDE NAES was written using CLIPS software, version 4.2 for the IBM PC. The software is available from the authors on a 5 1/4 inch diskette. In order to run the program, CLIPS must first be installed on your computer. If it is not already installed on your hard drive, copy the CLIPS diskette(s) into a sub-directory named ‘CLIPS’. Run CLIPS by typing, ‘CLIPS’. Once CLIPS has been installed on your computer, you are ready to run the Nutrition Advisor Expert System. 
Insert the NAES program diskette into drive A and from the CLIPS directory on your hard drive type, ‘copy a:nut.’ <RETURN>. This will copy the program into the CLIPS directory of your hard drive. Next, type ‘CLIPS’ <RETURN>. This will bring up the CLIPS environment. NAES can then be loaded into the system by typing ‘load “nut”’ <RETURN>. The program will load in about forty seconds, depending of course on your computer system. When the CLIPS’ prompt returns, the system is loaded and ready to run. Run it by typing ‘(reset)’ <RETURN>, and then ‘(run)’ <RETURN>. The (reset) command initializes the system and the (run) command starts program execution. A main menu should appear with the following six options: (1) Food Item Information (2) Dietary Analysis of Meal(s) (3) Ailment Prediction Based on Nutritional Deficiencies (4) Top-Ten (5) Database Update (6) Exit To select an option simply enter the menu number followed by the <RETURN> key. SYSTEM FEATURES This section provides a detailed discussion of each of the features offered by the system. There are six menu options. A step-by-step explanation of how to use each menu option follows along with a discussion of implementation details. The first menu option is ‘Food Item Information’. Select this option if you are interested in obtaining nutrition information for a given food item. When you select this option you will be prompted to enter the name of the food item. Enter the food item and press <RETURN>. The food item name entered by the user is simply matched against the food item name field in the list of facts. The template used for food items is: (item <food name> <calories> <calcium> <iodine> <iron> <magnesium> <phosphorus> <sodium> <vitamin A> <vitamin B1> <vitamin B2> <vitamin B6> <vitamin B12> <folic acid> <niacin> <vitamin C> <vitamin D> <vitamin E>) If a match is found, the corresponding nutritional multi-field values for a standard serving are extracted by the system and displayed on the screen. If, on the other hand, the food item is not present in the database, the user is informed accordingly and instructed to use menu option five. 'Database Update', to update the database. Table 1 shows a sample run of this menu option. <table> <thead> <tr> <th>Enter Food Item: beef</th> </tr> </thead> <tbody> <tr> <td>Calories</td> </tr> <tr> <td>Calcium</td> </tr> <tr> <td>Iodine</td> </tr> <tr> <td>Iron</td> </tr> <tr> <td>Magnesium</td> </tr> <tr> <td>Phosphorus</td> </tr> <tr> <td>Sodium</td> </tr> <tr> <td>Vitamin A</td> </tr> <tr> <td>Vitamin B1</td> </tr> <tr> <td>Vitamin B2</td> </tr> <tr> <td>Vitamin B6</td> </tr> <tr> <td>Vitamin B12</td> </tr> <tr> <td>Folic Acid</td> </tr> <tr> <td>Niacin</td> </tr> <tr> <td>Vitamin C</td> </tr> <tr> <td>Vitamin D</td> </tr> <tr> <td>Vitamin E</td> </tr> </tbody> </table> Table 1. A run of 'Food Item Information' The second menu option is 'Dietary Analysis of Meal(s)'. Select this option if you are interested in obtaining nutrition information for a meal or meals. When you select this option you will first be prompted to supply some personal data, i.e., name, age, and gender. If the gender is female and the age is greater than ten, the system will also inquire if you are pregnant. There are fifteen different categories that are utilized based upon sex and age. This information is necessary to determine the appropriate caloric and United States Recommended Daily Allowances (USRDA) guidelines. 
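NAES itself is implemented in CLIPS, but the behaviour of menu option one is easy to mirror in a general-purpose language for readers who want to experiment outside the CLIPS environment. The short Python sketch below is ours, not part of NAES, and the sample nutrient values are made up; it simply keeps one entry per food item in the same field order as the (item ...) template described above and looks it up by name.

```
# Illustrative sketch of the 'Food Item Information' lookup; values are invented.
NUTRIENT_FIELDS = [
    "calories", "calcium", "iodine", "iron", "magnesium", "phosphorus", "sodium",
    "vitamin A", "vitamin B1", "vitamin B2", "vitamin B6", "vitamin B12",
    "folic acid", "niacin", "vitamin C", "vitamin D", "vitamin E",
]

# One "fact" per food item, mirroring (item <food name> <17 nutrient values>).
FOOD_FACTS = {
    "beef": [250.0, 11.0, 0.0, 2.7, 21.0, 175.0, 60.0, 0.0, 0.06,
             0.18, 0.30, 2.1, 7.0, 4.2, 0.0, 0.1, 0.4],
}

def food_item_information(name):
    """Return a nutrient -> value mapping for a food item, or None if unknown."""
    values = FOOD_FACTS.get(name)
    if values is None:
        return None  # NAES would point the user to menu option five, 'Database Update'.
    return dict(zip(NUTRIENT_FIELDS, values))

if __name__ == "__main__":
    info = food_item_information("beef")
    print(info if info is not None else "food item not in database")
```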
Once this information is provided, you will be prompted to enter the name and relative quantities of food items that you either have consumed or plan to consume. This information is categorized on two levels. The first categorization that takes place references the time of day that consumption occurs (breakfast, lunch, or dinner). The second categorization that occurs relates to the nutritional food group that the food item belongs to (meat, dairy, fruit and vegetable, or bread and cereal). If you wish to skip an entry, simply press <RETURN>. Once you have entered all of the necessary information, the system will quickly analyze the nutritional content of your meal(s) and display any nutritional deficiencies based upon your specific USRDA requirements. The implementation of this module is more complex than that of the first menu option. Each entered food item is asserted into the database in the fact form: (dayfood <food name> <quantity> <breakfast,lunch,dinner>). The <quantity> field in this fact represents the fuzzy sets: quantity = (very small, small, medium, large, very large) Each member of the fuzzy set represents a factor designed to alter the food's relative nutritional content based upon serving size. The translation formulas for each member of the fuzzy set are shown in Table 2. <table> <thead> <tr> <th>Serving Size</th> <th>Nutrient Content</th> <th>Formula</th> </tr> </thead> <tbody> <tr> <td>very small</td> <td>&lt;nutrient content&gt;</td> <td>0.50</td> </tr> <tr> <td>small</td> <td>&lt;nutrient content&gt;</td> <td>0.75</td> </tr> <tr> <td>medium</td> <td>&lt;nutrient content&gt;</td> <td>1.00</td> </tr> <tr> <td>large</td> <td>&lt;nutrient content&gt;</td> <td>1.25</td> </tr> <tr> <td>very large</td> <td>&lt;nutrient content&gt;</td> <td>1.50</td> </tr> </tbody> </table> Table 2. Fuzzy set formulas Once all of the 'dayfood' items have been entered, the 'dayfood' facts are matched against the database of known foods. All of the calories, vitamins, and minerals of the entered foods are totaled and used to generate a fact containing the individual's total nutritional intake for that day. This 'total' nutritional fact is then asserted into the database in the form: (total <calories> <calcium> <iodine> <iron> <magnesium> <phosphorus> <sodium> <vitamin A> <vitamin B1> <vitamin B2> <vitamin B6> <vitamin B12> <folic acid> <niacin> <vitamin C> <vitamin D> <vitamin E>). If a 'dayfood' fact does not match an already existing food item in the database, NAES will ask the user if the food item should be asserted into the permanent database of food items. This particular feature of the expert system is very important because it achieves a flexible methodology whereby the ability to 'learn' and improve performance is realized. Once the new food is asserted, it becomes part of the system's working knowledge base. The nutritional content of the new food item is also added to the 'total' food intake fact so that the complete dietary intake for that day can be used to correlate the nutritional deficiencies with the individual's daily eating habits. Next, the system determines the person's specific USRDA nutritional group by matching personal data against the database. The system will place the user in one of fifteen different categories based upon age and sex. The nutritional content of the entered food items is compared with the USRDA guidelines for that specific person. The system then generates another fact called 'deficiencies'. 
It is of the form: (deficiencies <calories> <calcium> <iodine> <iron> <magnesium> <phosphorus> <sodium> <vitamin A> <vitamin B1> <vitamin B2> <vitamin B6> <vitamin B12> <folic acid> <niacin> <vitamin C> <vitamin D> <vitamin E>). The system then informs the user of any deficiencies in his or her diet citing the calculated deficiency percentages. The deficient nutrient(s) and their respective deficiencies are then asserted into the database in the form: (def <nutrient> <amount deficient>). This information is important because it can answer important health questions such as: (a) Am I consuming too many calories? (b) Do I consume appropriate amounts of all the essential nutrients? (c) What nutrients are lacking in my normal diet? Table 3 shows a sample run of menu option three, 'Dietary Analysis of Meal(s), by an individual with dietary habits that are less than exemplary. Please enter your name: Scott Please enter your age: 24 Please enter your gender (m/f): m Quantities are: [very small small medium large very large] Enter breakfast meat group Enter breakfast dairy product Enter breakfast fruit and vegetable group Enter breakfast bread and cereal group Enter lunch meat group Enter lunch dairy product Enter lunch fruit and vegetable group Enter lunch bread and cereal group Enter dinner meat group: beef Enter Quantity of "beef" small Enter dinner dairy product Enter dinner fruit and vegetable group: peas Enter Quantity of "peas" very small Enter dinner bread and cereal group The following is a list of your USRDA Deficiencies (%): - Calories: 55.97% - Calcium: 94.13% - Iodine: 100.00% - Magnesium: 68.5% - Phosphorus: 35.19% - Sodium: 93.79% - Vitamin A: 91.28% - Vitamin B1: 74.38% - Vitamin B2: 52.00% - Vitamin B6: 25.00% - Vitamin B12: 100.00% - Folic Acid: 86.87% - Niacin: 9.84% - Vitamin C: 85.55% - Vitamin D: 100.00% - Vitamin E: 97.00% Table 3. A run of 'Dietary Analysis of Meal(s)' These deficiencies are utilized by the third major component of this system, the 'Ailment Prediction Based on Nutritional Deficiencies' menu option. Select this option if you are interested in speculating about possible ailments that you may incur based upon the continuation of your observed dietary habits. The knowledge base for this predictive analysis option was derived from the 'Nutrition Almanac' (Kirschmann 1975). This reference notes the fact that nutritional Authorities have linked deficiencies in one or more nutrients to the appearance of a number of diseases. Fortunately, most diseases caused by such deficiencies can be corrected when all essential nutrients are supplied. This option can only be executed if you have previously run menu option two, 'Dietary Analysis of Meal(s)'. If you attempt to run menu option three, 'Ailment Prediction Based on Nutritional Deficiencies', without first running 'Dietary Analysis of Meal(s)', you will be directed to first run 'Dietary Analysis of Meal(s)'. Assuming that you have previously run 'Dietary Analysis of Meal(s)', selecting menu option three, 'Ailment Prediction Based on Nutritional Deficiencies', will automatically generate a list of zero or more possible ailments that you may incur if you continue to maintain such dietary habits. Using the dietary information entered in menu option two, and selecting menu option three we discover that this particular individual runs the risk of contracting several diseases if he continues to maintain his observed dietary habits. 
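The deficiency percentages shown in Table 3 follow directly from the serving-size factors of Table 2 and the comparison against the USRDA values for the user's category. The following Python sketch is ours and uses placeholder nutrient and USRDA figures rather than the ones NAES ships with; it shows one straightforward way to compute the scaled intake and the resulting percentage deficiencies.

```
# Illustrative sketch of the Table 2 scaling and the USRDA deficiency computation.
SERVING_FACTORS = {"very small": 0.50, "small": 0.75, "medium": 1.00,
                   "large": 1.25, "very large": 1.50}

def scaled_intake(nutrients, quantity):
    """Scale a food item's nutrient content by the fuzzy serving size (Table 2)."""
    factor = SERVING_FACTORS[quantity]
    return {name: value * factor for name, value in nutrients.items()}

def total_intake(consumed):
    """Sum the scaled nutrient contents of all (nutrients, quantity) pairs for one day."""
    totals = {}
    for nutrients, quantity in consumed:
        for name, value in scaled_intake(nutrients, quantity).items():
            totals[name] = totals.get(name, 0.0) + value
    return totals

def deficiencies(totals, usrda):
    """Percentage by which the intake falls short of each USRDA value (0% if met)."""
    return {name: max(0.0, 100.0 * (required - totals.get(name, 0.0)) / required)
            for name, required in usrda.items()}

if __name__ == "__main__":
    beef = {"calories": 250.0, "iron": 2.7}      # placeholder nutrient values
    peas = {"calories": 80.0, "iron": 1.5}
    usrda = {"calories": 2700.0, "iron": 10.0}   # placeholder USRDA figures
    day = [(beef, "small"), (peas, "very small")]
    print(deficiencies(total_intake(day), usrda))
```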
Ailment: common cold Rating: quite possible % Deficient: Vitamin A 91.28 % Vitamin B6 25.00 % Vitamin C 85.55 % Vitamin D 100.00 % Ailment: rickets Rating: quite possible % Deficient: Vitamin D 100.00 % Calcium 94.13 % Phosphorus 35.19 % Ailment: scurvy Rating: quite possible % Deficient: Vitamin C 85.55 % Ailment: pellagra Rating: possible % Deficient: Vitamin B1 74.38 % Vitamin B2 52.00 % Niacin 9.84 % Table 4. A run of 'Ailment Prediction Based on Nutritional Deficiencies' Table 4 is a sample run of menu option three, 'Ailment Prediction Based on Nutritional Deficiencies'. Listed along with the possible ailments are the fuzzy ratings and percentage deficiencies justifying the possible ailment rating. This feature works quite efficiently due to the fact that each ailment is a rule. The left hand side of each ailment rule consists of nutrients that, if known to be consistently deficient in a person's diet, may have a causal relationship with the given ailment. These deficiencies are asserted into the knowledge base by running menu option two, 'Dietary Analysis of Meal(s)'. If all of the nutritional deficiencies are present, the ailment rule fires. The fuzzy rating explanation facility is based upon the average nutritional percentage deficiencies that were calculated in the 'Dietary Analysis of Meal(s)' feature. Ratings are assigned as shown in Table 5. Menu option four, ‘top-ten’, generates a top-ten listing of food items for a given nutrient. When you select this feature you will be presented with a seventeen item list (calories and sixteen other key nutrients) from which you are to select one item. Once you have selected the item of interest, a “Working...” status message will appear. Shortly thereafter, an ordered list of the top-ten foods for that particular nutrient will be displayed on the screen. Table six is a sample run of this menu option. <table> <thead> <tr> <th>Vitamin C (mg)</th> <th></th> </tr> </thead> <tbody> <tr> <td>01 orange</td> <td>90.00</td> </tr> <tr> <td>02 baked potato</td> <td>20.00</td> </tr> <tr> <td>03 banana</td> <td>15.00</td> </tr> <tr> <td>04 peas</td> <td>13.00</td> </tr> <tr> <td>05 corn flakes</td> <td>8.80</td> </tr> <tr> <td>06 pot pie</td> <td>6.81</td> </tr> <tr> <td>07 apple</td> <td>5.20</td> </tr> <tr> <td>08 eggnog</td> <td>3.00</td> </tr> <tr> <td>09 whole milk</td> <td>2.44</td> </tr> <tr> <td>10 apple pie</td> <td>1.35</td> </tr> </tbody> </table> Table 6. A run of ‘Top Ten’ Implementation of this feature was complicated by the need for a sorting algorithm. Once the user selects the nutrient of interest, the relevant index of the food item facts is known because, by design, there is a direct mapping between the list selection numbers and the food item template nutrition fields. A list is built consisting of all of the pertinent food item nutrient field values. Next, the numbers are recursively compared in pairs. Beginning at the head of the list, the smaller of the two numbers is removed to another temporary list while the larger of the two remains in the original list. This process is repeated until finally only one number remains in the original list, and that number is the largest number of the original list. That largest number is then asserted into a new ‘max’ list. This process is repeated ten times, each iteration using a new list that did not include the previously identified and removed maximum value. 
Finally, an ordered max list of the top-ten highest numbers is created and then used to match against the database of food item facts. These top-ten food items are then displayed in order on the screen along with their respective nutrient percentages. Menu option five, ‘Database Update’, provides the user with a convenient mechanism for updating the database of food item facts. When you select this feature you may either: (a) add a food item to the database; (b) retract a food item from the database. In order to add a food item to the database, you must be able to supply the necessary nutritional details for that food item. To retract a food item, you need only know the name of the food item. This feature is important because it makes the system flexible. The scope of the pristine database can easily be expanded and thereby improve overall performance. Also, database errors are also easily corrected through the retract and add options. Implementation of this useful feature was simple and direct. To add a food item fact to the database, the user-supplied information is simply asserted using the food item template. To retract a given food item, a match and retract is performed using the food item name. Table 7 shows a sample run for this feature. ``` How would you like to modify the data base? 1. Add a food item 2. Retract a food item Please select 1 or 2: 2 Enter food name: plum "plum" has been retracted from the database. ``` Table 7. A run of 'Database Update' The final menu option is 'Exit'. Select this option only if you want to exit from the Nutrition Advisor Expert System and be returned to the CLIPS' prompt. If you select this option inadvertently, simply type '(reset)' <RETURN>, and '(run)' <RETURN>. This will restart NAES. Implementation of the exit feature is accomplished quite simply through the system clear screen and halt commands. **FINAL REMARKS** The CLIPS expert system shell provided an excellent developmental environment for the exploration of automated knowledge-based reasoning in the application area of dietary analysis and nutritional guidance. The forward-chaining rule-based language provided inferencing and representation capabilities that allowed the programmers to create the code of the system with a very application-oriented architecture. Facts and rules in the knowledge base could easily be understood because their names directly related the functionality of the rule to the user. The built-in inference engine eliminated the need for the programmer to create any kind of a reasoning mechanism. Additional information can easily be added to the expert system by creating new rules and adding more facts. Through the use of familiar terms in the facts and rules and the elimination of building an inference engine, the CLIPS expert system shell allows a developer to create a full-blown expert system for practically any application in a relatively short period of time. **REFERENCES**
{"Source-Url": "https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19920007397.pdf", "len_cl100k_base": 4768, "olmocr-version": "0.1.49", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 16907, "total-output-tokens": 5145, "length": "2e12", "weborganizer": {"__label__adult": 0.0019407272338867188, "__label__art_design": 0.00078582763671875, "__label__crime_law": 0.001239776611328125, "__label__education_jobs": 0.007293701171875, "__label__entertainment": 0.00018727779388427737, "__label__fashion_beauty": 0.001026153564453125, "__label__finance_business": 0.00104522705078125, "__label__food_dining": 0.02386474609375, "__label__games": 0.0020847320556640625, "__label__hardware": 0.0059661865234375, "__label__health": 0.31005859375, "__label__history": 0.0003848075866699219, "__label__home_hobbies": 0.0008540153503417969, "__label__industrial": 0.00287628173828125, "__label__literature": 0.0011386871337890625, "__label__politics": 0.0006403923034667969, "__label__religion": 0.0020046234130859375, "__label__science_tech": 0.10992431640625, "__label__social_life": 0.0003886222839355469, "__label__software": 0.073486328125, "__label__software_dev": 0.44921875, "__label__sports_fitness": 0.0017042160034179688, "__label__transportation": 0.0011911392211914062, "__label__travel": 0.0006604194641113281}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 20162, 0.06022]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 20162, 0.26867]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 20162, 0.86835]], "google_gemma-3-12b-it_contains_pii": [[0, 2725, false], [2725, 5624, null], [5624, 8169, null], [8169, 11199, null], [11199, 12712, null], [12712, 15168, null], [15168, 17483, null], [17483, 20162, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2725, true], [2725, 5624, null], [5624, 8169, null], [8169, 11199, null], [11199, 12712, null], [12712, 15168, null], [15168, 17483, null], [17483, 20162, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 20162, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 20162, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 20162, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 20162, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 20162, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 20162, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 20162, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 20162, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 20162, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 20162, null]], "pdf_page_numbers": [[0, 2725, 1], [2725, 5624, 2], [5624, 8169, 3], [8169, 11199, 4], [11199, 12712, 5], [12712, 15168, 6], [15168, 17483, 7], [17483, 20162, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 20162, 0.20541]]}
olmocr_science_pdfs
2024-11-26
2024-11-26
77474bb0f1af068d756be94c936844547c9bbdaa
Tutorial on Modeling VAT Rules Using OWL-DL
Nielsen, Morten Ib; Simonsen, Jakob Grue; Larsen, Ken Friis
Publication date: 2007
Document version: Publisher's PDF, also known as Version of record
Citation for published version (APA): Tutorial on Modeling VAT rules using OWL-DL

Morten Ib Nielsen, Jakob Grue Simonsen and Ken Friis Larsen
Department of Computer Science, University of Copenhagen
Email: {mortenib|simonsen|kflarsen}@diku.dk
August 28, 2007
Total number of pages: 16

Abstract

This paper reports on work in progress. We present a methodology for constructing an OWL-DL model of a subset of Danish VAT rules. It is our intention that domain experts without training in formal modeling or computer science should be able to create and maintain the model using our methodology. In an ERP setting such a model could reduce the Total Cost of Ownership (TCO) and increase the quality of the system. We have selected OWL-DL because we believe that description logic is suited for modeling VAT rules due to the decidability of important inference problems that are key to the way we plan to use the model and because OWL-DL is relatively intuitive to use.

1 Introduction

Imagine an ERP system where domain experts can create and implement changes in e.g. VAT rules without the help of programmers. The benefits would be shorter development time and fewer mistakes due to misinterpretation of specifications, which lead to reduced TCO and increased quality of the software. On a coarse-grained scale such a system consists of three parts: a model of the rules, a tool to edit the model and the core ERP system using the model. In this paper we focus on the first part - the model. A priori two requirements exist. First, the modeling language must be strong enough to express the rules in question, and second, it must be easy to use without training in formal modeling or computer science. In a more general setting the model can be used as a VAT knowledge system which external programs can query through an interface. In the long run we envision that authorities such as SKAT (Danish tax administration) can provide online access to the model e.g. using web services such that applications always use the newest version of the model.

In this paper we describe a methodology we have used to develop a model of a subset of Danish VAT rules using the general purpose Web Ontology Language (OWL) editor Protégé-OWL\(^1\) and we report on our experiences in doing so. We selected a subset of Danish VAT rules consisting of flat VAT (25%) plus a set of exceptions where goods and services are free of VAT, chosen because they seem representative. Further the rules are accessible to us by way of an official guideline by the Danish tax administration. Our study focuses on the feasibility of using OWL to model VAT rules and not on the usability of the Protégé-OWL tool itself. By feasibility we mean how easy or difficult it is (for a human) to express and understand VAT rules in OWL; in particular this does not cover issues such as modularization. The methodology presented here is inspired by the article [1] together with our own experience. Readers of this guide are assumed to have user experience of Protégé-OWL corresponding to [2] but not of computer science nor of modeling in general.

\(^1\) \url{http://protege.stanford.edu/overview/protege-owl.html}

1.1 Motivation

One of the overall goals of the strategic research project 3gERP is to reduce the TCO of Enterprise Resource Planning (ERP) systems.
We believe that a VAT model helps to this end in two ways. First we envision that domain experts create and update the model thus eliminating a layer of interpretation (the programmer) where errors can be introduced. Second a VAT model can change handling of VAT from being a customization task into being a configuration task, meaning that no code needs to be changed when the model is updated. VAT and legal rules in general deal with frequent transactions between legal entities. Transactions are typically triggered when certain conditions are fulfilled and therefore dynamic checks on these conditions are needed. The idea is to use the model to automatically infer what actions should be taken based on the conditions. In the case of VAT rules we can ask the model whether a delivery is subject to VAT or not based on the information we know about the delivery. The answer from the model will be Yes, No or Maybe and can be used to trigger an appropriate transaction. In a broader perspective the model is supposed to work as a VAT knowledge system that given a context and a question can tell other systems what to do, e.g. guide accounting systems and if required indicate that authorities should be contacted etc. 1.2 Roadmap The remainder of this paper is structured as follows. In Section 2 we give a short account of description logic and OWL. In Section 3, 4 and 5 we present our methodology by giving examples. Finally we outline future work in Section 6 and we conclude in Section 7. 2 Description Logic and OWL In this section we give a short introduction to description logic (DL) and OWL. This introduction can be skipped, if you are already familiar with the concepts. Description logics are knowledge representation languages that can be used to structure terminological knowledge in knowledge systems which are formally well-understood. A knowledge system typically consists of a knowledge base together with a reasoning service. The knowledge base is often split into a set of concept axioms the TBox, a set of assertions the Abox and a Role hierarchy. These constitute the explicit knowledge in the knowledge system. The reasoning service is a program that can check the consistency of the knowledge base and make implicit knowledge explicit, e.g. decide equivalence of concepts. Since the reasoning service is a pluggable component knowledge systems separate the technical task of reasoning from the problem of constructing the knowledge base. \footnote{In the case where insufficient information is provided in order to answer the question.} 2.1 OWL OWL which is short for Web Ontology Language is an ontology language designed to be compatible with the World Wide Web and the Semantic Web. The most important abstraction in OWL is concept axioms which are called classes. Each class has a list of necessary conditions and zero or more equivalent lists of necessary and sufficient conditions [2]. A list of necessary conditions is a list of conditions that every member of the class must satisfy. In the same way a list of necessary and sufficient conditions is a list of conditions that must be satisfied by every member of the class and if satisfied guarantees membership in the class. OWL is based on XML, RDF and RDF-S and can be used to represent information in a way that is more accessible to applications than traditional web pages. In addition OWL has a formal semantics, which enables logic reasoning. OWL comes in three variants: OWL-Lite \( \subseteq \) OWL-DL \( \subseteq \) OWL-Full of increasing expressive power. 
The variants OWL-Lite and OWL-DL are based on the description logics \( SHIF(D) \) and \( SHOIN(D) \) respectively [3], which guarantees that important inference problems such as satisfiability and subsumption are decidable. Since OWL is XML based we need an editor to create OWL ontologies. We have used the general purpose OWL editor Protégé developed by Stanford Medical Informatics at the Stanford University School of Medicine.

3 VAT Exemption 1: Sales outside EU

Our methodology is aimed at modeling VAT rules as described in guidelines instead of the raw law text itself. This choice was made because guidelines are more accessible to us, and because these are the rules that small companies adhere to in practice. Further the investigation of the feasibility of using OWL to model VAT rules concerns the ease with which rules can be formalized and not so much from where the rules are extracted\(^3\). In what follows we refer to the guideline as the legal source. In order to ease reading we have used the word concept only when we speak about the legal source. The corresponding concept in the model (OWL) is called a class. A concept in the legal source is modeled as one or more classes in the model. Here we present the steps we took in order to make our model of Danish VAT rules.

\(^3\) Since we have used the official guidelines by SKAT (Danish tax administration) we believe that the content of the guidelines is in accordance with the law.

3.1 Pre-modeling

1. Download Protégé-OWL from http://protege.stanford.edu/download/release/full/ and install. Make sure you can start Protégé in OWL-mode (logic view). When started and if you select the Class tab it should look like Figure 1.
2. Download [2] and read it. This is important because many of the constructions we use are explained herein.

Figure 1: Protégé-OWL class-tab, logic view.

3.2 Modeling

First you must decide which legal source(s) you want to model. In our case we used the official guideline *Moms - fakturering, regnskab mv, E nr. 27, Version 5.2 digital, 19. januar 2005*.

3.2.1 Overall framework

Modeling should start with a read through of the legal source. Based on this, general (to be refined later) classes such as *Location*, *Goods*, *Services* and *FreeOfVAT* together with attributes such as *hasDeliveryType* and *hasSalesPrice* can be created as subclasses of the built-in top-level class *owl:Thing*. An attribute can usually take on at most a finite number of values. In that case we use value partitions to model them as described in [2][p. 73-76]. If the domain is not finite we use data type properties instead.\(^4\) Deciding on the overall framework helps to structure the capturing of rules in a homogeneous way and enables working in parallel (which can be needed if the legal source is large). After our read through of the legal source we arrived at the overall framework in Figure 2.

Figure 2: Overall framework.

**Naming Convention.** All classes, properties, individuals etc. should be given names picked from or inspired by the legal source. All names should be in the same language as the legal source (in our case Danish). Using the naming convention supported by Protégé-OWL, class and individual names should be written in Pascal Notation, e.g. *InternationalOrganization* not *internationalOrganization* or *International_Organization*, while property names are written in Camel Hump Notation, e.g. *someProperty*.
Typically a property is used to assign an attribute to a class. In this case we prefix the name of the property with a verb describing the kind of relation the class has along that property, e.g. *hasNumberOfSides* or *isFragile*.

3.2.2 Rule modeling - step I

Having modeled the overall framework it is time to go through the legal source one section at a time looking for rules that should be modeled. Here we give an elaborate description of how to model a single rule from the legal source starting from the overall framework in Figure 2. In Sections 4 and 5 we give a brief description of how to model other rules. Together the modeling of these rules covers all the constructions we have used in our VAT model. Since our legal source is in Danish we present the rules in their original Danish phrasing together with a translation into English.

Table 1 Extract from the legal source and its translation into English.
<table> <thead> <tr> <th>Danish</th> <th>English</th> </tr> </thead> <tbody> <tr> <td>Salg til lande uden for EU (3. lande). Du skal ikke beregne moms af varer, du leverer til steder udenfor EU eller til Færøerne og Grønland. Det samme gælder normalt også for ydelser, men du skal dog opkræve moms af visse ydelser.</td> <td>Sales outside EU (3rd countries). No VAT should be added to goods delivered to destinations outside the European Union, or to the Faroe Islands or Greenland. This fact ordinarily also applies to services, but VAT should be added to certain services. Translated from [4][p. 9]</td> </tr> </tbody> </table>

Table 2 Necessary & sufficient conditions for application of the rule in Table 1.
- The rule concerns sales.
- The rule concerns both goods and services.
- The place of delivery must be outside the European Union, or the Faroe Islands or Greenland.

Now let us consider the rule shown in Table 1. Since our model is only a prototype we make a slight simplification and assume that the rule also applies to all services. With this simplification we can identify the necessary and sufficient conditions for application of the rule. These are shown in Table 2. In order to model the necessary and sufficient conditions in Table 2 we must add some attributes to VarerOgYdelser. The first and second condition in Table 2 tell us that we must be able to model that goods and services are sold.\(^5\) We do that by adding an attribute to the class VarerOgYdelser (translates into GoodsAndServices) which already exists in our overall framework. Attributes are modeled using functional properties. In accordance with our naming convention we select the name harLeveranceType (translates into hasDeliveryType). Since there is a finite number of delivery types we model this attribute as a value partition, i.e., an enumeration. Value partitions can be created using a built-in wizard.\(^6\) Just as in [2] we store value partitions as subclasses of the class ValuePartitions. The reason plain enumerations are not used is that they cannot be sub-partitioned. Using value partitions we retain the possibility of further refining the concepts the value partitions model.

---
\(^4\) An exception is the domain of truth values, which is built-in as a data type.
\(^5\) Instead of being sold goods can also be used as e.g. a trade sample.
See [4][p. 8-9] for other examples.
\(^6\) Menu ► Tools ► Patterns ► Value Partition....

Remark. Technically enumerations are constructed by defining a class in terms of a finite set of individuals plus a functional property that has this class as its range. Since individuals are atoms they cannot be subdivided. On the other hand a value partition is defined using a functional property having as its range a class defined as the union of its subclasses, all of which are distinct. These subclasses can (because they are classes) be partitioned into more subclasses if needed.

Having created the value partition harLeveranceType, which can have Salg (translates into Sale) as a value, we need to add it as an attribute to the class VarerOgYdelser. This is done by adding to the necessary conditions an existential quantification over the corresponding property having the value partition (or data type in case of data type attribute) as its range. Thus we add harLeveranceType some LeveranceType to VarerOgYdelser. The third condition tells us that we must be able to model that goods and services have a place of delivery. A read through of the legal source tells us that only three places are needed, namely Denmark, EU and non-EU. Thus this attribute, which we name harLeveranceSted (translates into hasPlaceOfDelivery), must be modeled as a value partition. Having modeled these attributes the class VarerOgYdelser looks as shown in Figure 3.

Figure 3: Class and property view after adding attributes.

3.2.3 Rule modeling - step II

Now we are ready to model the rule itself. Since the rule describes a situation where you do not have to pay VAT we model it as a subclass of Momsfritaget (translates into FreeOfVAT). Following our naming convention we name the class MomsfritagetSalgAfVarerOgYdelserTilIkke-EU (translates into VATFreeSalesOfGoodsAndServicesInNon-EU). Then we add a textual description of the rule and a reference to where in the legal source the rule stems from to the rdfs:comment field. Next we must specify necessary and sufficient conditions on membership in MomsfritagetSalgAfVarerOgYdelserTilIkke-EU. It is important to remember that if a class has two sets of necessary and sufficient conditions then they must imply each other, see [2][p. 98]. Based on the necessary and sufficient conditions captured in Table 2 we add the following necessary and sufficient conditions to MomsfritagetSalgAfVarerOgYdelserTilIkke-EU (a description logic rendering of these conditions is given after Table 4 below):
- VarerOgYdelser
- harLeveranceSted some Ikke-EU
- harLeveranceType some Salg

The result is shown in Figure 4.

4 VAT Exemption 2: Sales to Embassies

In this section and onwards we will not mention when to add references to the legal source in rdfs:comment fields of classes and properties. The rule of thumb is that this should always be done. Now let us consider the rule in Table 3. We identify the necessary and sufficient conditions for application of the rule. These are shown in Table 4.

Table 3 Extract from the legal source and its translation into English.
Sales to embassies. VAT should not be added to goods and transport services delivered to embassies and international organizations in countries within the European Union. Translated from [4][p. 9]

Table 4 Necessary & Sufficient conditions for application of the rule in Table 3.
- The rule concerns sales.
- The rule concerns goods and transport services.
- The place of delivery must be in the European Union.
- The buyer must be an embassy or an international organization.
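Readers who prefer description logic notation may find it useful to see the Protégé conditions of Section 3.2.3 written out as a single equivalence axiom. The rendering below is ours (standard DL syntax rather than Protégé's), and it expresses exactly the three conditions listed for the first exemption class:

\[
\text{MomsfritagetSalgAfVarerOgYdelserTilIkke-EU} \;\equiv\; \text{VarerOgYdelser} \;\sqcap\; \exists\,\text{harLeveranceSted}.\text{Ikke-EU} \;\sqcap\; \exists\,\text{harLeveranceType}.\text{Salg}
\]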
4 VAT Exemption 2: Sales to Embassies

In this section and onwards we will not mention when to add references to the legal source in the rdfs:comment fields of classes and properties. The rule of thumb is that this should always be done. Now let us consider the rule in Table 3. We identify the necessary and sufficient conditions for application of the rule. These are shown in Table 4.

Table 3 Extract from the legal source, translated into English: Sales to embassies. VAT should not be added to goods and transport services delivered to embassies and international organizations in countries within the European Union. Translated from [4][p. 9].

Table 4 Necessary & sufficient conditions for application of the rule in Table 3.

- The rule concerns sales.
- The rule concerns goods and transport services.
- The place of delivery must be in the European Union.
- The buyer must be an embassy or an international organization.

4.1 Rule modeling - step I

We are already able to model that the rule concerns sale and that the place of delivery must be in the EU. We cannot yet model the specific service transportation. Therefore we must add it to our model. Since it is a service, it should be modeled as a subclass of Services. We name the class modeling the service transportation Transport (translates into Transportation). Now we can model that something belongs to the set of goods and transport services by requiring membership of Varer $\sqcup$ Transport. Finally, we must be able to model that the buyer is an embassy or an international organization. Since there are only finitely many different kinds of buyers, we model this as a value partition, and because this attribute applies to both Varer and Transport, we add it to their most specific common superclass, which is VarerOgYdelser. We name this attribute harKøberType (translates into hasKindOfBuyer). After having done all this, the model looks as shown in Figure 5.

Figure 5: The model after adding classes and attributes as described in Section 4.1.

4.2 Rule modeling - step II

Having added all the necessary classes and attributes to the model, we are ready to model the rule itself. Since the rule describes a situation where you do not have to pay VAT, we model it as a subclass of Momsfritaget. Following our naming convention we name the class MomsfritagetSalgTilAmbassaderOgInternationaleOrganisationerIEU (translates into VATFreeSalesToEmbassiesAndInternationalOrganizationsInEU). Based on the necessary and sufficient conditions captured in Table 4 we add the following necessary and sufficient conditions to MomsfritagetSalgTilAmbassaderOgInternationaleOrganisationerIEU:

- harLeveranceType some Salg
- Varer $\sqcup$ Transport
- harLeveranceSted some EU
- harKøberType some AmbassadeOgPersonaleMedDiplomatiskeRettigheder

The result is shown in Figure 6.

Figure 6: Asserted conditions of our model of the legal rule in Table 3.
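The same description-logic reading applies here (again our paraphrase, with the names taken from the model); the only new ingredient relative to the first rule is the union of goods and transport services:

$$\begin{aligned}
\text{MomsfritagetSalgTilAmbassaderOgInternationaleOrganisationerIEU} \;\equiv\;\; & (\text{Varer} \sqcup \text{Transport}) \;\sqcap\; \exists\,\text{harLeveranceType}.\text{Salg} \\
& \sqcap\; \exists\,\text{harLeveranceSted}.\text{EU} \;\sqcap\; \exists\,\text{harKøberType}.\text{AmbassadeOgPersonaleMedDiplomatiskeRettigheder}
\end{aligned}$$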
5 VAT Exemption 3: Sales in other EU countries

In this section we consider one final rule, the rule in Table 5. We identify the necessary and sufficient conditions for application of the rule. These are shown in Table 6.

Table 5 Extract from the legal source, translated into English [4][p. 8]: Sales in other EU countries. No VAT should be added to goods delivered to companies in other EU countries, provided that the companies are registered for VAT. In this case you must acquire the VAT registration number of the company. Translated from [4][p. 8].

Table 6 Necessary & sufficient conditions for application of the rule in Table 5.

- The rule concerns sales.
- The rule concerns goods.
- The place of delivery must be in the European Union.
- The buyer must be registered for VAT.
- You must acquire the VAT registration number of the company.

5.1 Rule modeling - step I

We are already able to model that the rule concerns sale of goods delivered inside the European Union. The new thing is that we must be able to indicate whether a buyer is registered for VAT and, if so, we must register the buyer's VAT registration number. We use a functional data type property named erKøberMomsregistreret (translates into isTheBuyerRegisteredForVAT) with the data type xsd:boolean as its range to model whether the buyer is registered for VAT. Similarly, we use a functional data type property named erKøbersMomsnummer (translates into isBuyersVATRegistrationNumber) with the data type xsd:string as its range to register the buyer's VAT registration number if he has one.

5.2 Rule modeling - step II

Having added the necessary attributes to the model, we are ready to model the rule itself. Since the rule describes a situation where you do not have to pay VAT, we model it as a subclass of Momsfritaget. Following our naming convention we name the class MomsfritagetSalgTilAndreEU-lande (translates into VATFreeSalesToOtherEUCountries). Based on the necessary and sufficient conditions captured in Table 6 we add the following necessary and sufficient conditions to MomsfritagetSalgTilAndreEU-lande:

- harLeveranceType some Salg
- Varer
- harLeveranceSted some EU
- erKøberMomsregistreret has true

We note that the obligation to register the buyer's VAT registration number is modeled indirectly, see Section 5.1. The result is shown in Figure 8.

6 Future work

Since this is work in progress there are a lot of areas we need to address. In the near future we plan to integrate our model in a prototype ERP system as described in the introduction. This opens the possibility of modeling the parts of the Danish VAT legislation concerning depreciation and VAT reporting (since they are intertwined and contain a lot of technical requirements on the financial reports). We also need to model other countries' VAT rules in order to confirm that Danish VAT rules are indeed representative with respect to the constructions that are needed in the modeling language. Based on this we need to refine our overall framework such that it captures the common structure, and we need to identify what kinds of questions a model must be able to answer. The synthesized knowledge from modeling the VAT rules of other countries should also result in a more detailed analysis of what we can and cannot model. Based on all this we should design a minimal description logic extended with the needed functionality identified in the analysis just mentioned, such as predicates like $x < 100$ which are needed in some rules. We should also provide a reasoner for the logic together with an editor, such that the above process can be repeated. Finally, in order to compare our OWL model with a different approach, we want to model the rules we have already formalized in OWL using Datalog, the de facto standard language used to express rules in deductive databases. It would also be interesting to try a hybrid solution, e.g. OWL plus a rule language like SWRL. This work is independent of the tasks mentioned above and can be carried out in parallel.

7 Conclusion

We have shown how to model a subset of Danish VAT rules concerning exemption from VAT using Protégé-OWL. First we created an overall framework for the VAT model with the property that legal rules and the concepts they involve can be modeled as subclasses of existing classes in the framework. This helps to ensure that related concepts are modeled in the same way and that a single concept is not modeled twice. The second step was an iterative process consisting of two steps repeated for each rule. The first step is to extend the model such that the rule in question can be modeled. This is done by modeling concepts from the legal source as classes in the model and by adding attributes to the necessary conditions of such classes. The second step is to model the rule itself.
This is done by adding specific requirements for application of the rule to the necessary and sufficient conditions of the class modeling the rule. The step-by-step iterative modeling has been working fine in practice, and an extension to cover several different VAT and duty rates does not seem to be problematic as long as they do not require us to model restrictions such as $x < 100$, which is not supported directly in OWL. Whether this is a weakness of OWL, or just us trying to use OWL for something it was not designed to do, remains an open question. Apart from modeling inequalities we have not had modeling problems. One problem, though, is that reasoning about individuals in OWL models is not supported very well. Therefore we have tried to avoid the use of individuals wherever possible (using value partitions).

References
Tables, Priority Queues, Heaps • Table ADT – purpose, implementations • Priority Queue ADT – variation on Table ADT • Heaps – purpose, implementation – heapsort

Table ADT • A table in generic terms has M columns and N rows – each row contains a separate record – each column contains a different component, or field, of the same record • Each table, or set of data, is also generally sorted, or accessed, by a key record component – a single set of data can be organized into several different tables, sorted according to different keys • Another common term is a dictionary, whose entries are records, inserted and accessed according to a key value – key may be a field in the record or not – may also be used as a frontend for database access

The ADT table, or dictionary - Uses a search key to identify its items - Its items are records that contain several pieces of data

<table> <thead> <tr> <th>City</th> <th>Country</th> <th>Population</th> </tr> </thead> <tbody> <tr> <td>Athens</td> <td>Greece</td> <td>2,500,000</td> </tr> <tr> <td>Barcelona</td> <td>Spain</td> <td>1,800,000</td> </tr> <tr> <td>Cairo</td> <td>Egypt</td> <td>9,500,000</td> </tr> <tr> <td>London</td> <td>England</td> <td>9,400,000</td> </tr> <tr> <td>New York</td> <td>U.S.A.</td> <td>7,300,000</td> </tr> <tr> <td>Paris</td> <td>France</td> <td>2,200,000</td> </tr> <tr> <td>Rome</td> <td>Italy</td> <td>2,800,000</td> </tr> <tr> <td>Toronto</td> <td>Canada</td> <td>3,200,000</td> </tr> <tr> <td>Venice</td> <td>Italy</td> <td>300,000</td> </tr> </tbody> </table>

ADT Table – Operations • A simple and obvious set of operations can be used for a wide range of program activities – Create and Destroy Table instance – Determine the number of items, including zero – Insert an item in a table using a key value – Delete an item with a given key value – Retrieve an item with a given key value – Retrieve the items in the table (sorted or unsorted) • Entries with identical key values may be forbidden, but can be handled with a little imagination

The ADT Table - **void tableInsert(ItemType& item):** - store item under its key - **boolean tableDelete(KeyType key_value):** - delete item with key == key_value, if present - **ItemType* tableRetrieve(KeyType key_value):** - return pointer to item with key == key_value - **void traverseTable(Functor visitor):** - Functor: a function-object, much like a fn pointer - visitor is executed for each node in table

The ADT Table • Our table assumes distinct search keys – other tables could allow duplicate search keys • The `traverseTable` operation visits table items in a specified order – one common order is by sorted search key – a client-defined visit function is supplied as an argument to the traversal • called once for each item in the table

Selecting an Implementation • Linear implementations: Four categories – Unsorted: array based or pointer based – Sorted (by search key): array based or pointer based

Figure 11-3 The data members for two sorted linear implementations of the ADT table for the data in Figure 11-1: (a) array based; (b) pointer based

Selecting an Implementation • Nonlinear implementations – Binary search tree implementation • Offers several advantages over linear implementations

Figure 11-4 The data members for a binary search tree implementation of the ADT table for the data in Figure 11-1

Selecting an Implementation • The requirements of a particular application influence the selection of an implementation – Questions to be considered about an application before choosing an implementation • What operations are needed? • How often is each operation required? • Are frequently used operations efficient given a particular implementation?
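Before comparing implementations, it may help to see the operation set from the ADT-table slides above written out as a C++ interface. This is only a sketch: the operation names and semantics follow the slides, while the template parameters, the `size()` accessor, and the use of `std::function` for the visitor functor are our own choices.

```cpp
#include <cstddef>
#include <functional>

// KeyType/ItemType are whatever the client stores; the slides keep them abstract.
template <typename KeyType, typename ItemType>
class Table {
public:
    virtual ~Table() = default;

    virtual std::size_t size() const = 0;                 // number of items (possibly zero)
    virtual void tableInsert(const ItemType& item) = 0;   // store item under its key
    virtual bool tableDelete(const KeyType& key) = 0;     // remove item with this key, if present
    virtual ItemType* tableRetrieve(const KeyType& key) = 0; // nullptr if absent

    // "Functor visitor" from the slides: called once per item, in whatever
    // order the concrete implementation defines (e.g. sorted by key).
    virtual void traverseTable(std::function<void(const ItemType&)> visit) = 0;
};
```

An unsorted array, a sorted array, a linked list, or a binary search tree can each sit behind this interface, which is exactly the comparison the following slides make.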
Comparing Linear Implementations • Unsorted array-based implementation – Insertion is made efficiently after the last table item in an array – Deletion usually requires shifting data – Retrieval requires a sequential search *Figure 11-5a* Insertion for unsorted linear implementations: array based

Comparing Linear Implementations • Sorted array-based implementation – Both insertions and deletions require shifting data – Retrieval can use an efficient binary search Figure 11-6a Insertion for sorted linear implementations: array based

Comparing Linear Implementations • Unsorted pointer-based implementation – No data shifts – Insertion is made efficiently at the beginning of a linked list – Deletion requires a sequential search – Retrieval requires a sequential search *Figure 11-5b* Insertion for unsorted linear implementations: pointer based

• Sorted pointer-based implementation – No data shifts – Insertions, deletions, and retrievals each require a sequential search *Figure 11-6b* Insertion for sorted linear implementations: pointer based

Selecting an Implementation • Linear – Easy to understand conceptually – May be appropriate for small tables or unsorted tables with few deletions • Nonlinear – Is usually a better choice than a linear implementation – A balanced binary search tree • Increases the efficiency of the table operations

Selecting an Implementation
- **Unsorted array based** - Insertion: $O(1)$ - Deletion: $O(n)$ - Retrieval: $O(n)$ - Traversal: $O(n)$
- **Unsorted pointer based** - Insertion: $O(1)$ - Deletion: $O(n)$ - Retrieval: $O(n)$ - Traversal: $O(n)$
- **Sorted array based** - Insertion: $O(n)$ - Deletion: $O(n)$ - Retrieval: $O(\log n)$ - Traversal: $O(n)$
- **Sorted pointer based** - Insertion: $O(n)$ - Deletion: $O(n)$ - Retrieval: $O(n)$ - Traversal: $O(n)$
- **Binary search tree** - Insertion: $O(\log n)$ - Deletion: $O(\log n)$ - Retrieval: $O(\log n)$ - Traversal: $O(n)$

*Figure 11-7* The average-case order of the ADT table operations for various implementations

Selecting an Implementation for a Particular Application • Frequent insertions and infrequent traversals in no particular order – Unsorted linear implementation • Frequent retrievals – Sorted array-based implementation • Binary search – Balanced binary search tree • Frequent retrievals, insertions, deletions, traversals – Binary search tree (preferably balanced)

Generalized Data Set Management • Problem of managing a set of data items occurs many times in many contexts – arbitrary set of data represented by an arbitrary key value within the set • Strict separation of the set of data from the key helps with abstraction and generalization • Data Set – class or structure defined in application terms • Container class – STL terminology – holds key and data set items

Keyed Base Class • Create base class for associating *key* with an arbitrary item • Maintains key outside the item fields • Rows of Table are derived classes of this class • Inserting item in Table creates instance of derived class and stores it under key

```cpp
#include <string>
using namespace std;

typedef string KeyType;

class KeyedItem {
public:
    KeyedItem() {}
    KeyedItem(const KeyType& keyValue) : searchKey(keyValue) {}
    KeyType getKey() const { return searchKey; }
private:
    KeyType searchKey;
};
```
Table Item Class - Create table of cities indexed by city name - Might create *struct* for each city - name, population, country - Or, might derive this class from KeyedItem - Delegates chosen key to base class storage

```cpp
class City : public KeyedItem {
public:
    City() : KeyedItem() {}
    City(const string& name, const string& ctry, const int& num)
        : KeyedItem(name), country(ctry), pop(num) {}
    string cityName() const;
    int getPopulation() const;
    void setPopulation(int newPop);
private:
    // city's name is the search-key value, stored in the KeyedItem base class
    string country;
    int pop;
};
```

A Sorted Array-Based Implementation of the ADT Table - Default constructor and virtual destructor - Copy constructor supplied by the compiler - Has a typedef declaration for a "visit" function - Public methods are virtual - Protected methods: setSize, setItem, and position

A Binary Search Tree Implementation of the ADT Table - **Reuses** `BinarySearchTree` - An instance is a private data member - Default constructor and virtual destructor - Copy constructor supplied by the compiler - Public methods are virtual - Protected method: `setSize`

Priority Queue • Binary Search Tree is an excellent data structure, but not always – simple in concept and implementation – BST supports many useful operations well • insert, delete, deleteMax, deleteMin, search, searchMax, searchMin, sort – efficient average case behavior $T(n) = O(\log n)$ • However, BST is not good in all respects for all purposes – brittle with respect to balance – worst case $T(n) = O(n)$ • Balanced Trees are possible but more complex

Priority Queue - Priority Queue semantics are useful when items are added to the set in arbitrary order, but are removed in either ascending or descending priority order - priority can have a flexible definition - any property of the set elements imposing a total order on the set members - If only a partial order is imposed (multiple items with equal priority) a secondary tiebreaking rule can be used to create a total order

Priority Queue • The deletion operation for a priority queue is different from the one for a table – general 'delete' operation is not supported – item removed is the one having the highest priority value • Priority queues do not have retrieval and traversal operations

ADT Priority Queue

<table> <thead> <tr> <th>PriorityQueue</th> </tr> </thead> <tbody> <tr> <td>items</td> </tr> <tr> <td>createPriorityQueue()</td> </tr> <tr> <td>destroyPriorityQueue()</td> </tr> <tr> <td>pqIsEmpty()</td> </tr> <tr> <td>pqInsert()</td> </tr> <tr> <td>pqDelete()</td> </tr> </tbody> </table>

**Figure 11-8** UML diagram for the class *PriorityQueue*

The ADT Priority Queue: Possible Implementations • Sorted linear implementations – Appropriate if the number of items in the priority queue is small – Array-based implementation • Maintains the items sorted in ascending order of priority value • items[size - 1] has the highest priority *Figure 11-9a* An array-based implementation of the ADT priority queue

The ADT Priority Queue: Possible Implementations • Sorted linear implementations (continued) – Pointer-based implementation • Maintains the items sorted in descending order of priority value • Item having the highest priority is at beginning of linked list *Figure 11-9b* A pointer-based implementation of the ADT priority queue

The ADT Priority Queue: Possible Implementations - Binary search tree implementation - Appropriate for any priority queue - Largest item is rightmost and has at most one child *Figure 11-9c* A binary search tree implementation of the ADT priority queue
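For contrast with the hand-built implementations, the same pqInsert/pqDelete surface is available off the shelf. The following minimal sketch is not from the slides; it is shown only to pin down the semantics of "highest priority value comes out first":

```cpp
#include <iostream>
#include <queue>

int main() {
    // std::priority_queue is a max-priority queue by default:
    // top() is the largest element, matching pqDelete's "highest priority" rule.
    std::priority_queue<int> pq;

    for (int x : {3, 41, 7, 15}) pq.push(x);   // pqInsert
    while (!pq.empty()) {                      // pqIsEmpty
        std::cout << pq.top() << ' ';          // prints: 41 15 7 3
        pq.pop();                              // pqDelete
    }
    std::cout << '\n';
}
```

Internally it is a heap over a contiguous container, essentially the array-based heap the following slides develop.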
The ADT Priority Queue: Heap Implementation • A heap is a complete binary tree – that is empty, OR – whose root contains a search key >= the search key in each of its children, and whose subtrees are themselves heaps • Heap is the best approach because it is the most efficient for the specific PQ semantics • Heap provides a partially ordered tree – avoids brittleness of BST and has lower overhead than balanced search trees

Heaps • Note: – The search key in each heap node is $\geq$ the search keys in each of the node's children – The search keys of a node's children have no required relationship

Heaps • A maximum binary heap H is a complete binary tree satisfying the *heap-ordered* tree property: – *Complete*: Every level is complete, except possibly the last, and all leaves are as far left as possible – *Heap Ordered*: Priority of any node is $\geq$ priority of all its descendants – maximum element of the set is thus at the root • A minimum heap ensures that all nodes have priority values $\leq$ all their descendants – minimum element at the root

Heap – ADT

<table> <thead> <tr> <th>Heap</th> </tr> </thead> <tbody> <tr> <td><strong>items</strong></td> </tr> <tr> <td>createHeap()</td> </tr> <tr> <td>destroyHeap()</td> </tr> <tr> <td>heapIsEmpty()</td> </tr> <tr> <td>heapInsert()</td> </tr> <tr> <td>heapDelete()</td> </tr> </tbody> </table>

*Figure 11-10* UML diagram for the class Heap

Heap – Implementation • Considering typical heap operations, for example, insert into heap • Result must be a complete tree satisfying the heap property that all nodes are $\geq$ their descendants • Two-step insert process works well – insert the new item in the next "open" slot, keeping $H$ a complete binary tree – restructure $H$ to make it satisfy the heap-ordered property • Two-step remove – client code saves the root value for use – Replace root with the "last" node in level order – Restructure $H$ to migrate/percolate the new root to its correct tree location

Heap – Implementation - Traversal of the inserted node to its proper place requires at most $O(\log n)$ operations, since the height of a complete binary tree is $O(\log n)$ • **Deletion** is similar – always deletes the root of the tree, leaving two disjoint subtrees – place the item from the last node in the root – the out-of-place item in the root node then percolates down to its proper position – $O(\log n)$

Heap – Implementation • A data structure suitable for heap implementation must – support efficient determination of where the next and last slots in a complete tree are located, for insert and delete respectively – support efficient percolation of misplaced nodes • Percolation down is simple using standard child references and comparison of parent to child values • Percolation up is almost as simple, but requires a parent reference at each node • Knowing the last occupied and next open slots is more subtle under some data structures than others

Heap – Implementation • Pointer-based heaps require two child pointers and one parent pointer at each node – can use additional state information to track the location of the next and last complete-tree slots • An array-based heap implementation simplifies parent and child references by making them calculated – lowers space overhead – not clear execution time would be lower • array index calculation vs. pointer access • Similarly, the location of the next and last slots for the complete tree can be calculated from the number of nodes in the tree, which is simple to track
Heap – Array Implementation • In an array representation of a binary tree T – Root of T is at A[0] – parent of a node A[i] is at A[(i-1)/2] – A[i] is a leaf iff $2i+1 \geq n$ – in a heap with n elements the last element of the complete binary tree is at A[n-1] and the next element (element n+1) will be added at A[n]

Heap – Array Implementation • An array-based representation is attractive – need to know the heap's maximum size • Constant MAX_HEAP • Data members – items: an array of heap items – size: an integer equal to the current number of items in the heap

Heap – Array Implementation - heapDelete operation with arrays - Step 1: Return the item in the root - rootItem = items[0] Figure 11-12a Disjoint heaps (a)

Heap – Array Implementation • Step 2: Copy the item from the last node into the root: items[0] = items[size-1] • Step 3: Remove the last node: --size – Results in a semiheap Figure 11-12b A semiheap (b)

Heap – Array Implementation • Step 4: Transform the semiheap back into a heap – use the recursive algorithm heapRebuild – the root value trickles down the tree until it is not out of place • if the root has a smaller search key than the larger of the search keys of its children, swap the item in the root with that of the larger child

A Heap Implementation of the ADT Priority Queue • Priority-queue operations and heap operations are analogous – the priority value in a priority queue corresponds to a heap item's search key • One implementation – has an instance of the Heap class as a private data member – methods call analogous heap operations

A Heap Implementation of the ADT Priority Queue – disadvantage • requires knowledge of the priority queue's maximum size – advantage • a heap is always balanced • Another implementation – a heap of queues – useful when a finite number of distinct priority values are used, which can result in many items having the same priority value

Heapsort • Strategy – transform the array into a heap – remove the heap's root (the largest element) by exchanging it with the heap's last element – transform the resulting semiheap back into a heap

Heapsort Figure 11-17 Transforming the array anArray into a heap

Heapsort • Compared to mergesort – both heapsort and mergesort are $O(n \times \log n)$ in both the worst and average cases – however, heapsort does not require a second array • Compared to quicksort – quicksort is $O(n \times \log n)$ in the average case – it is generally the preferred sorting method, even though it has poor worst-case efficiency: $O(n^2)$

Summary • The ADT table supports value-oriented operations • The linear implementations (array based and pointer based) of a table are adequate only in limited situations – when the table is small – for certain operations • A nonlinear pointer-based (binary search tree) implementation of the ADT table provides the best aspects of the two linear implementations – dynamic growth – insertions/deletions without extensive data movement – efficient searches

Summary • A priority queue is a variation of the ADT table – its operations allow you to retrieve and remove the item with the largest priority value • A heap that uses an array-based representation of a complete binary tree is a good implementation of a priority queue when you know the maximum number of items that will be stored at any one time

Summary • Heapsort, like mergesort,
has good worst-case and average-case behaviors, but neither sort is as good as quicksort in the average case. • Heapsort has an advantage over mergesort in that it does not require a second array.
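To tie the array-based slides together, here is a compact, self-contained C++ sketch of heapRebuild, heapDelete, and heapsort as described above. The function names follow the slides; everything else (the int item type, vector storage, bounds handling) is our own filling-in, so treat it as an illustration rather than the textbook's code.

```cpp
#include <cassert>
#include <iostream>
#include <utility>
#include <vector>

// Percolate the out-of-place item at `root` down until the subtree rooted
// there is a (max-)heap again. `size` is the number of valid items.
void heapRebuild(std::vector<int>& items, std::size_t root, std::size_t size) {
    std::size_t child = 2 * root + 1;                 // left child in the 0-based layout
    if (child >= size) return;                        // root is a leaf
    std::size_t right = child + 1;
    if (right < size && items[right] > items[child]) child = right;  // pick larger child
    if (items[root] < items[child]) {
        std::swap(items[root], items[child]);
        heapRebuild(items, child, size);              // keep trickling down
    }
}

// Remove and return the largest item: save the root, copy the last node into
// the root, shrink the heap, then rebuild from the root (the slides' steps).
int heapDelete(std::vector<int>& items) {
    assert(!items.empty());
    int rootItem = items[0];
    items[0] = items.back();
    items.pop_back();
    heapRebuild(items, 0, items.size());
    return rootItem;
}

// Heapsort: build a heap bottom-up, then repeatedly swap the root (current
// maximum) with the last unsorted slot and rebuild the shrinking heap.
void heapsort(std::vector<int>& a) {
    for (std::size_t i = a.size() / 2; i-- > 0; ) heapRebuild(a, i, a.size());
    for (std::size_t last = a.size(); last > 1; --last) {
        std::swap(a[0], a[last - 1]);                 // move the maximum into place
        heapRebuild(a, 0, last - 1);
    }
}

int main() {
    std::vector<int> v{9, 4, 7, 1, 8, 2};
    heapsort(v);
    for (int x : v) std::cout << x << ' ';            // prints: 1 2 4 7 8 9
    std::cout << '\n';
}
```

The standard library packages the same algorithm as std::make_heap, std::push_heap, std::pop_heap and std::sort_heap over any random-access range.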
Language-Extension-Based Vectorizing Compiling Scheme on SDR-DSP

Xiaoqiang Ni(✉), Liu Yang, and Chiyuan Ma

School of Computer, National University of Defense Technology, Deya Street 109, Changsha 410073, People's Republic of China xiaoqiangni@nudt.edu.cn

Abstract. In this paper we propose a Language-Extension-based Vectorizing Compiling Scheme (LEVCS) for a newly developed DSP. The DSP is mainly designed for Software-Defined Radio (SDR) and is called SDR-DSP. The SDR-DSP architecture mixes the styles of VLIW (Very Long Instruction Word) and SIMD (Single Instruction Multiple Data). To explore the potential of SDR-DSP and achieve high performance, vectorization is one of the critical methods that must be provided. Because auto-vectorization techniques cannot satisfy the requirements of the typical application, LEVCS is used to direct the vectorization. The C-extending programming language used in LEVCS is called SDR-DSP-C. LEVCS uses flexible data reorganization to make vectorization on SDR-DSP more efficient. We use LEVCS to vectorize five benchmark kernels: Fast Fourier Transform (FFT), Finite Impulse Response filter (FIR), Infinite Impulse Response filter (IIR), dot product (Dotprod), and sum of vectors (vecsum). Experiment results show that LEVCS is functionally correct and can achieve 2.883–8.074 speedups compared to TI-DSPs.

Keywords: SIMD · VLIW · SDR-DSP · Vectorizing compiling scheme

1 Introduction

With the development of wireless communication techniques, the performance requirements on DSPs become higher and higher. Software-defined radio (SDR) is a new communication technique, which implements radio functions in software. SDR has become popular because it meets the trend for better flexibility and scalability [1]. To meet the requirements of high throughput and low power, an SDR processor requires a more complicated architecture than a traditional digital signal processor [2]. SIMD processing has become one of the main DSP architectures for meeting the real-time performance requirements of SDR solutions [4]. We design and develop a new DSP architecture for high performance applications, which is named SDR-DSP. It has a new instruction set and it is based on VLIW [6] and SIMD [7]. It includes two processing units, which are called SU (scalar unit) and VU (vector unit). SDR-DSP also has FPUs (floating point units) and can support single-precision and double-precision floating point efficiently.

Today most DSP applications are implemented using a combination of both C code and assembly code. For the critical code which is important to performance, DSP programmers use highly optimized assembly code [11]. DSP compilers usually supply libraries written in assembly code to support SIMD applications. The CEVA-XC family of DSP cores features a combination of VLIW and vector engines that enhance typical DSP capabilities with advanced vector processing. The CEVA-XC4000 is the third generation of the CEVA-XC family [9]. Its VLIW architecture shares many similarities with TI's C64x DSP family [8] but only supports fixed-point computation. The lack of an FPU implies that the CEVA DSP cannot efficiently support floating-point applications [10]. The CEVA-DSP compiler uses the mode of combining C code with assembly code. It mainly uses assembly intrinsics for SIMD operations [11]. The C6678 DSP is a high-performance fixed/floating-point DSP based on TI's Keystone multi-core DSP architecture C66x.
The C66x Digital Signal Processor (DSP) extends the performance of the C64x+ and C674x DSPs through enhancements and new features. Many of the new features target increased performance for vector processing [12]. TI's C66x compiler also uses the mode of combining C code with assembly code. It supports SIMD by using the mode of interfacing C and C++ with assembly language [13]. Writing assembly code is difficult and time consuming. The assembly programmer has to handle time-consuming machine-level issues such as register allocation and instruction scheduling. The work on vectorization must be done manually by assembly programmers. It would be more convenient if these issues could be taken care of by the compiler [11]. If the compiler supports a high-level vectorized language, programmers can exploit the architecture characteristics through the high-level language.

SDR involves a large number of frequently changing radio communication algorithms [3], so a high-level language development environment for SDR-DSP is urgently needed. The architecture of SDR-DSP requires more complicated compiling techniques to develop data parallelism and instruction parallelism. SDR-DSP has a special VU to support SIMD. Making good use of the SIMD architecture characteristics of SDR-DSP is critical to performance, so support for vectorization in the SDR-DSP compiler is very important. Today the autovectorization techniques in compilers are not mature. Saeed Maleki et al. evaluate the vectorization capabilities of today's most popular compilers [14]: GCC (version 4.7.0), ICC (12.0) and XLC (11.1). They use different benchmarks, which include a set of synthetic benchmarks, two applications from PACT and the Media Bench applications. The results of the evaluation show that today's compilers can at most vectorize 45–71% of the loops in the synthetic benchmark and only 18–30% in the collection of applications. Today's popular compilers are not effective at autovectorization, and autovectorization remains a difficult problem in the compiling field. It will take a long time and a large amount of research to find scientific, optimized ways to solve this problem. SDR-DSP is designed for wireless communication and the requirement to develop high performance applications is urgent. It is therefore essential for us to find a new vectorizing compiling scheme for SDR-DSP.

This paper gives a Language-Extension-based Vectorizing Compiling Scheme for SDR-DSP. For convenience, we call it LEVCS. We design a C-extending programming language called SDR-DSP C and develop a vectorizing compiler to support SDR-DSP C. LEVCS supports SDR-DSP C and flexible data reorganization. In this paper, Sect. 2 describes the architecture of SDR-DSP. Section 3 introduces LEVCS, including language extension for vectorization and data reorganization for vectorization. Section 4 gives the results of the experiment and performance analysis.

2 Architecture of SDR-DSP

As shown in Fig. 1, SDR-DSP consists of two processing units: SU (scalar unit) and VU (vector unit). SDR-DSP can issue ten instructions per clock cycle. It supports instruction-level parallelism based on VLIW and data-level parallelism based on SIMD.

- SDR-DSP includes a unified instruction-fetch unit and instruction-dispatch unit. The dispatch unit issues instructions for SU and VU simultaneously.
- SU performs scalar tasks and controls the flow of the execution of VU. VU performs computation-intensive parallel tasks.
- SDR-DSP has a vector memory (VM) to store vector data. A Vector Data Accessing Unit is used to load and store vector data.
It supports efficient data supply and transport for wide vector computation.
- VU includes a set of isomorphic VEs, and the number of VEs is configurable. Each VE has a local register file, accumulators and parallel functional units (MAC, ALU and BP). The parallel functional units support fixed-point and floating-point operations.

Fig. 1. Architecture of SDR-DSP

A lot of applications need to reorganize data within the VEs in VU. To support shuffle operations, SDR-DSP has a data-shuffling unit to exchange data among different VEs. There is a special shuffle-modes memory which is separate from the vector memory (VM). It contains various shuffle modes. A shuffle operation permutes data among local registers in various VEs by byte, half word or word. The shuffle operation among VEs can make data exchange more efficient.

3 Introduction to LEVCS

Because of the SIMD characteristics of the SDR-DSP architecture and the data-intensive characteristics of SDR applications, an efficient vectorizing compiler for SDR-DSP is very important.

Language Extending for Vectorization. For some C programs, the SDR-DSP compiler uses autovectorization to analyze the parallel parts of the programs. The parallel parts which satisfy the conditions for vectorization are then recognized and transformed into vectorized code running on VU. Because of the limitations of autovectorization we have described in Sect. 1, many complex C programs cannot be vectorized automatically. Some programs include complex loops which are very difficult to analyze and vectorize. Some programs benefit from complex SDR-DSP instructions, such as saturation arithmetic, reduction, shuffle and so on. These instructions cannot be mapped easily from a high-level language [5], and autovectorization for them is even more difficult. To compensate for the lack of autovectorization in the compiler, we put forward LEVCS for SDR-DSP. In LEVCS, we design and implement a C-extending programming language for SDR-DSP which is called SDR-DSP C. SDR-DSP C is designed according to the instruction set and architecture of SDR-DSP. It provides support for vectorization on SDR-DSP, and programmers can use SDR-DSP C to write vectorized programs conveniently. SDR-DSP C extends standard C with pragmas and intrinsics, as well as with vector data types and vector instructions for SDR-DSP. The main vector data types are shown in Table 1.

<table> <thead> <tr> <th>Data types</th> <th>Machine mode</th> <th>Signification</th> </tr> </thead> <tbody> <tr> <td>vec double</td> <td>V8DI</td> <td>A vector consisted of 8 doubles</td> </tr> <tr> <td>vec float</td> <td>V16SF</td> <td>A vector consisted of 16 floats</td> </tr> <tr> <td>vec int</td> <td>V16SI</td> <td>A vector consisted of 16 ints</td> </tr> <tr> <td>vec short</td> <td>V32HI</td> <td>A vector consisted of 32 shorts</td> </tr> <tr> <td>vec char</td> <td>V64QI</td> <td>A vector consisted of 64 chars</td> </tr> </tbody> </table>

SDR-DSP has some complex vector instructions, such as multi-mode shuffle, multi-width reduction, complex multiplication and so on. Using these instructions effectively can greatly improve applications' performance. So we design new syntax corresponding to all of these instructions in SDR-DSP C. For example, to implement shuffle instructions, we add v_vshufw, v_vshuff, v_vshufh and v_vshufb in SDR-DSP C; to implement reduction instructions, we add v_reduc2, v_reduc4, v_reduc8 and v_reduc16 in SDR-DSP C.
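As a point of reference for what these extensions replace, the scalar form of one of the benchmark kernels (dot product) is shown below in plain C/C++; this is our own illustration, not code from the paper. With the vec float type from Table 1 and a reduction intrinsic such as v_reduc16, the accumulation might collapse into a handful of vector operations, but the exact SDR-DSP C syntax and intrinsic signatures are not given in this excerpt, so we do not reproduce them.

```cpp
// Scalar dot product over n floats: the kind of loop the vector types and
// reduction intrinsics of SDR-DSP C are designed to subsume.
float dotprod_scalar(const float* x, const float* y, int n) {
    float acc = 0.0f;
    for (int i = 0; i < n; ++i) {
        acc += x[i] * y[i];   // one multiply-accumulate per element
    }
    return acc;
}
```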
SDR-DSP C adds several pragmas to direct compiling optimization, such as: #pragma vect, #pragma novect, #pragma unroll(n). In LEVCS, we develop a high-level language development environment for SDR-DSP. It includes a vectorizing compiler which supports SDR-DSP C. It also includes an assembler, linker, debugger and simulator. Programmers can use SDR-DSP C to develop vectorized programs for SDR-DSP and use the SDR-DSP compiler to parse and translate these programs into an intermediate language and then output the optimized assembly code. The code can then be assembled and linked. This supplies a friendly and convenient environment for programmers to develop applications for SDR-DSP.

**Data Reorganization for Vectorization.** The data-level parallelism in wireless communication, video and image processing applications includes both regular data-level parallelism and irregular data-level parallelism. An application with regular data access can be considered to have regular data-level parallelism. If it has irregular data access and complex control flow, such as data-dependent control flow, the data-level parallelism is considered to be irregular. For regular data-level parallelism, vectorization on a SIMD architecture can usually achieve good results. For irregular data-level parallelism, however, the result of vectorization is often unsatisfactory. In such cases the performance does not improve, and sometimes it may even be reduced. The wider the SIMD width, the more serious this problem becomes. So, while maintaining the high efficiency of regular data-level parallelism, support for irregular data-level parallelism is very important, and flexible and efficient data reorganization is essential.

Traditional SIMD architectures use guarded instructions to support data-dependent control flow [16]. This method uses masks to disable some SIMD lanes. But for complex branches, this method wastes computing resources and becomes inefficient. Woop [17] uses the scheme of branch fusion. SIMD lanes on different branch paths are executed sequentially and are synchronized at the branch joint. The utilization of SIMD lanes is still not good. The Maven VT microarchitecture [18] uses a unique lane-buffering mechanism. When a branch is encountered, the SIMD lanes on the various branch paths are buffered and executed sequentially. In the buffer, SIMD lanes with the same execution paths can be merged to improve the utilization of SIMD lanes. In this method, extra hardware is needed and the hardware cost increases. The Vector Thread Architecture [19] configures an instruction cache for each SIMD lane. Each SIMD lane can fetch instructions independently (thread fetch). This method can support data-dependent control flow effectively, but its scalability is not good: each SIMD lane needs an instruction buffer, and the hardware cost becomes intolerable as the SIMD width grows wider and wider. Dynamic Warp formation [20, 21] can support branches efficiently in GPUs. It reorganizes the threads executing various branch paths in multiple warps into new warps, where each warp includes threads executing the same branch path. This method needs to allocate registers for each warp to implement the reorganization of many warps, and it brings heavy register costs. The instruction shuffle scheme is a new mechanism that can handle control flow efficiently [22]. It stores instructions on various branch paths into a unified instruction buffer array.
The instruction shuffle unit issues the corresponding instructions to the SIMD lanes for the various branch paths. This mechanism can execute the various branch paths in parallel, but extra instructions are needed to support the instruction shuffle and the hardware cost also increases. Reordering the output data is still a problem. Conditional Streams [23] support irregular data-level parallelism on the IMAGINE processor. The data streams are classified and the data belonging to the same operations are put together. The original kernel with control flow is split into multiple kernels without control flow. This method destroys the original order of the data. Many applications in communication, video and image processing are data-order dependent, so the recovery of the data order is more complex.

Most applications in wireless communication, video and image processing are written in a high-level language, and the source code of these applications is complex and flexible. Efficient high-level language compiling is essential. It is very difficult and time consuming to do data reorganization manually. It would be more convenient if these issues could be solved by the compiler. If the compiler supports data reorganization, programmers can focus on the algorithms by using the high-level language. Data reorganization is critical to vectorization, so it is essential for us to implement flexible and efficient data reorganization in the compiler for a wide SIMD architecture.

LEVCS implements flexible data reorganization for the wide SIMD architecture. It mainly has three modules to implement data reorganization for various requirements: data reorganization based on multi-modulo, data reorganization for wide vector filling, and data reorganization for branches.

Many algorithms in wireless communication require complex data exchange, such as FFT [15], FIR, IIR, the Hartley transform, the Discrete Cosine Transform, Viterbi decoding and so on. In such algorithms there are many irregular accesses to vectors, which is a problem for performance. The real parts and imaginary parts of complex numbers are usually stored contiguously, but in some algorithms the real parts and imaginary parts participate in different vector computations, or are different operands of one vector computation. In order to utilize the SIMD architecture efficiently, data reorganization is required. To implement efficient vector data exchange, LEVCS supports data reorganization based on multi-modulo. According to the requirements of the algorithm, when data is loaded from VM to a VR (vector register) or stored from a VR into VM, the data needs to be shuffled based on various modulos. The multiple modulos are designed to direct data reorganization. For each kind of modulo, there is a corresponding item in SMT.

The data are loaded from VM into VRs which will participate in vector operations. In some applications, the data loaded are not long enough to fill the vector. In other applications, the data loaded into the 16 VEs as one vector include data for several different vector operations, or several source vectors of one vector operation. In these cases, if the data are not reorganized, they cannot be operated on in parallel, and some VEs will be idle when the vector operation is executed. The wide SIMD architecture cannot be fully utilized.
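The next paragraphs describe three reorganization strategies in detail. As a toy illustration of the underlying problem and fix (our own sketch, with the 16-lane loop standing in for one 16-VE vector instruction; no SDR-DSP instructions are used), consider two independent 8-element additions: issued separately, half of the lanes sit idle each time, whereas packed into one 16-element buffer they occupy all lanes in a single pass. This corresponds roughly to the horizontal case described below.

```cpp
#include <array>

// Two unrelated 8-wide additions packed into one 16-lane pass.
// The second loop stands in for a single 16-lane vector instruction;
// without packing, each 8-element job would leave 8 of the 16 VEs idle.
void packed_add(const std::array<float, 8>& a1, const std::array<float, 8>& b1,
                const std::array<float, 8>& a2, const std::array<float, 8>& b2,
                std::array<float, 16>& out) {
    std::array<float, 16> lhs{}, rhs{};
    for (int i = 0; i < 8; ++i) {        // fill lanes 0..7 with job 1, lanes 8..15 with job 2
        lhs[i] = a1[i];      rhs[i] = b1[i];
        lhs[i + 8] = a2[i];  rhs[i + 8] = b2[i];
    }
    for (int i = 0; i < 16; ++i) {       // one full-width operation instead of two half-empty ones
        out[i] = lhs[i] + rhs[i];
    }
}
```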
LEVCS implements data reorganization for wide vector filling using inner reorganization, horizontal reorganization and vertical reorganization. LEVCS identifies loops that can be simdized. Firstly, inner reorganization is used. Inner reorganization is used for one vector operation in inner loops. If the effective vector length is less than the SIMD width, some VEs will be idle without data. In such a case, short vectors are combined into a wide vector to let more VEs work in parallel. If the vector length still cannot fulfill the requirements of the SIMD width, the loops are unrolled. After loop unrolling, more data can be reorganized to do vector filling. The compiler must find enough parallel computation, and the stride of the various short vectors needs to be appropriate: if the data stride is too big, the cost of loading these data will be too high to be acceptable. If inner reorganization still cannot fulfill the requirements, horizontal reorganization is used. Horizontal reorganization is for multiple unrelated vector operations in inner loops. Suppose a vector includes several groups of elements; the various groups contain sources for different vector operations, and each group is not wide enough for the vector width. If such vectors are not reorganized, then while one group of elements participates in one vector operation, the VEs corresponding to the other groups of elements will be idle. In such a case, LEVCS does horizontal reorganization: it combines corresponding groups of multiple vectors together to form new, full-width vectors. If data reorganization of inner loops cannot fulfill the requirements, LEVCS takes outer loops into consideration and does vertical reorganization. Vertical reorganization is for multiple unrelated vector operations in various layers of loops. Multiple unrelated vector operations are reorganized together to form wide vectors. LEVCS continues to unroll outer loops to get enough data for vectors when needed.

For the wide SIMD architecture, the problems brought by branches become more serious. In order not to increase the cost of hardware, data reorganization for branches solves this problem at compile time. LEVCS can do flexible data reorganization according to the various cases of branches. All VEs in VU can work in parallel and do not need to process the various branch paths redundantly. The execution efficiency of loops with branches can thus be improved.

4 Results and Discussion

In the experiment, we use LEVCS to vectorize the FFT, FIR, IIR, dotprod and vecsum programs. The programs are compiled with the vectorizing compiler of SDR-DSP. After being assembled and linked, we get the executable program and run it on the cycle-accurate simulator. We use $T_{SDR-DSP}$ to represent the cycle counts of the kernel in the program. We also execute the floating-point FFT program on the TMS320C66X simulator in CCS5.1. We get the same result as on SDR-DSP. We use optimization level -O3 to compile the C program [24] and get the cycle counts $T_{C66X}$. Comparing the two experiment results, we can get the speedup from Eq. (1)

$$speedup = \frac{T_{C66X}}{T_{SDR-DSP}} \quad (1)$$

From that, we can see that the vectorized programs using LEVCS produce correct results and achieve higher performance than the C programs running on the TI DSP. The results of the experiment (Table 2) show that developing vectorized programs with SDR-DSP C and using the vectorizing compiler of SDR-DSP can make good use of the SIMD characteristics of SDR-DSP. We can achieve 2.883–8.074 speedups compared with the TI DSP.
The experiment shows that the LEVCS designed in this paper is valid and efficient.

Table 2. Experiment result

<table> <thead> <tr> <th></th> <th>T_{SDR-DSP}</th> <th>T_{C66X}</th> <th>Speedup</th> </tr> </thead> <tbody> <tr> <td>FFT_float (1024)</td> <td>14608</td> <td>117940</td> <td>8.074</td> </tr> <tr> <td>FIR_float (1024 * 16)</td> <td>29493</td> <td>85019</td> <td>2.883</td> </tr> <tr> <td>IIR_float (1024)</td> <td>6159</td> <td>29722</td> <td>4.826</td> </tr> <tr> <td>vector_dotprod_float (1024)</td> <td>675</td> <td>4131</td> <td>6.828</td> </tr> <tr> <td>vector_sum_float (1024)</td> <td>526</td> <td>4131</td> <td>7.854</td> </tr> </tbody> </table>

5 Conclusion

In order to explore the performance of the digital signal processor SDR-DSP, this paper designs and implements LEVCS, a Language-Extension-based Vectorizing Compiling Scheme for SDR-DSP. This design provides a vectorized programming method with a new C-extending programming language named SDR-DSP C. The corresponding vectorizing compiler is developed for SDR-DSP, including the support for SDR-DSP C and flexible data reorganization. The experimental results show that we can implement vectorization on SDR-DSP correctly and can get good speedups by using LEVCS. In practice, LEVCS can be used to vectorize the computing kernels of the applications on SDR-DSP.

References

8. Gardner, J.S.: CEVA exposes DSP six pack: XC4000 family uses coprocessors to buff up the baseband. The Linley Group, Microprocessor Report, March 2012
13. Texas Instruments: TMS320C6000 optimizing compiler v7.3 user's guide. SPRU187T, July 2011
{"Source-Url": "https://www.springer.com/cda/content/document/cda_downloaddocument/9789811031588-c2.pdf?SGWID=0-0-45-1597390-p180433947", "len_cl100k_base": 4950, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 19763, "total-output-tokens": 6689, "length": "2e12", "weborganizer": {"__label__adult": 0.0007181167602539062, "__label__art_design": 0.0005173683166503906, "__label__crime_law": 0.0005769729614257812, "__label__education_jobs": 0.0004038810729980469, "__label__entertainment": 0.00013184547424316406, "__label__fashion_beauty": 0.00029206275939941406, "__label__finance_business": 0.00032639503479003906, "__label__food_dining": 0.0005698204040527344, "__label__games": 0.000965595245361328, "__label__hardware": 0.0157012939453125, "__label__health": 0.0009164810180664062, "__label__history": 0.0004301071166992187, "__label__home_hobbies": 0.0001659393310546875, "__label__industrial": 0.0014934539794921875, "__label__literature": 0.00021278858184814453, "__label__politics": 0.0004665851593017578, "__label__religion": 0.0010232925415039062, "__label__science_tech": 0.1396484375, "__label__social_life": 8.046627044677734e-05, "__label__software": 0.00727081298828125, "__label__software_dev": 0.82568359375, "__label__sports_fitness": 0.0006427764892578125, "__label__transportation": 0.0014352798461914062, "__label__travel": 0.00031876564025878906}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26295, 0.05373]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26295, 0.68491]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26295, 0.85803]], "google_gemma-3-12b-it_contains_pii": [[0, 2593, false], [2593, 6137, null], [6137, 7786, null], [7786, 10674, null], [10674, 14106, null], [14106, 17719, null], [17719, 20826, null], [20826, 23570, null], [23570, 26055, null], [26055, 26295, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2593, true], [2593, 6137, null], [6137, 7786, null], [7786, 10674, null], [10674, 14106, null], [14106, 17719, null], [17719, 20826, null], [20826, 23570, null], [23570, 26055, null], [26055, 26295, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26295, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26295, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26295, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26295, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26295, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26295, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26295, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26295, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26295, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26295, null]], "pdf_page_numbers": [[0, 2593, 1], [2593, 6137, 2], [6137, 7786, 3], [7786, 10674, 4], [10674, 14106, 5], [14106, 17719, 6], [17719, 20826, 7], [20826, 23570, 8], [23570, 26055, 9], [26055, 26295, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26295, 0.13462]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
b068726cc8832a988e27057bbbfe46e81707dd97
Using fuzzy analytical hierarchy process (AHP) to evaluate web development platform Ahmad Sarfaraz\textsuperscript{a,*}, Pooja Mukerjee\textsuperscript{a} and Kouroush Jenab\textsuperscript{b} \textsuperscript{a}Department of Manufacturing Systems Engineering and Management, California State University Northridge, USA \textsuperscript{b}Education chair, Society of Reliability Engineering-Ottawa, Canada, Visiting Professor \begin{abstract} With the increasing importance of the role that websites play in all aspects of life, more and more web development projects are being undertaken by companies. One of the key decisions on which both the short-term and long-term success of a project depends is choosing the right development platform. Its criticality can be judged by the fact that once a platform is chosen, one has to live with it throughout the software development life cycle. The entire shape of the project depends on the language, operating system, tools, frameworks, etc., in short, on the web development platform chosen. In addition, choosing the right platform is a multi-criteria decision-making (MCDM) problem. We propose a fuzzy analytical hierarchy process model to solve this MCDM problem, tapping the real-life modeling potential of fuzzy logic and combining it with the widely used and powerful AHP modeling method. \end{abstract} \section{1. Introduction} We are in an era in which the internet and websites are an indispensable part of life. The internet is, by all means, the top source of information and the most widely used channel of communication. Practically all companies, from small to large, have their own websites. In this highly competitive environment where technology is growing fast, having a high-quality website is indispensable. Companies depend on the web for efficient business functioning, for advertising, sharing company-related information, providing remote access to customers, and so on. Other areas are online banking, remote patient management by doctors, university websites, and many more. Additionally, some businesses are themselves entirely web-based. Hence, a large number of web development projects are conducted every year. A web application can be broadly divided into two parts, the front-end and the back-end. The front-end comprises the graphical user interface of the website. The back-end comprises the server software that processes real-time requests from users, the database server that stores data, the content management system, and interface software. Web application development involves incorporating functionality for handling client information, requests, responses, cookies, logging, error handling, session management, memory allocation, security, multimedia, etc. The common challenge faced by organizations is to select the appropriate web development platform. In the context of web application development, the term platform comprises the programming language along with reusable software such as frameworks, libraries, tools, and auxiliary languages. Auxiliary languages include HTML/DHTML, CSS, and JavaScript. This decision needs to consider many criteria such as security, performance, licensing cost, compatibility, hardware costs, administration, etc. Lin et al. (2011) pointed out that although information technology acceptance is a mature research topic, the analysis of acceptance of alternative products is highly neglected.
Based on their study in the context of the Microsoft and Java web application platforms, they concluded that it is better to perform a relative study of alternative products than to rely on a single-product study. We now review related work. Prechelt (2011) performed direct exploratory experiments to compare the platforms Java EE, Perl, and PHP. Chen and Ma (2004) stated that Sun’s Java 2 Enterprise Edition (J2EE), an open technology framework, is the most successful in the area of server-side development. Ramana and Prabhakar (2005) investigated the performance of the LAMP architecture and devised a scale to measure architecture performance. Wu et al. (2008) proposed and implemented a very strong security module for the LAMP application system; LAMP ranks high when comparing the security criteria of various development platforms. On the other hand, Gu and Tang (2010) stated that ASP.NET from Microsoft is one of the most representative and widely used web development technologies. They analyzed and compared three frameworks provided by ASP.NET, i.e., MVC, MVP, and Web Forms, and concluded that the framework chosen must be based on actual development needs. It is evident that the decision to choose the right web development platform requires a significant amount of time and involves multi-criteria decision making. The analytic hierarchy process (AHP) model developed by Thomas Saaty is a method for selecting the best decision alternative after evaluating the alternatives against multiple criteria. Each decision alternative gets a final score depending upon how well it meets the criteria. A pair-wise comparison technique is used, and the relative importance/ranking of the criteria against each other is also determined. Chen and Ma (2004) presented a literature review of the applications of AHP and described numerous application areas where AHP is used as a multiple-criteria decision-making tool. In addition, Lee et al. (2008) used the analytical hierarchy process (AHP) to compare various critical success factors for web-based development. Many information systems studies have used it to evaluate IT projects, rank alternatives, and make resource allocation decisions (Bodin et al., 2005). In the AHP methodology, the decision matrix is constructed using the integers 1–9 and their reciprocals. It does not consider the cognitive factors of human judgment. Nakatani and Chuang (2011) used AHP to develop a web analytics tool selection method. Fuzzy logic, introduced by Zadeh (1965), is a multi-valued logic that accounts for a smooth transition between members and non-members of a set, as opposed to binary crisp sets. In decision-making, human judgment is mostly vague and uncertain rather than a precise mathematical value, and fuzzy logic systems better mimic and handle human thought. Hence, a fuzzy AHP approach can be used to overcome the drawback of AHP, i.e., lowered accuracy due to its inability to handle vague human judgments. In the AHP methodology, two alternatives/criteria are compared to each other and a numeric value is assigned to designate the degree of relevance of one over the other. However, this is not the most practical approach. In fuzzy AHP, the preference scale is represented by intervals that overlap each other. This enhances the accuracy of the final judgment by taking into account the imperfect precision of human judgment. The methodology is discussed in detail below.
Numerous studies have tapped the capability of fuzzy logic and control systems in conjunction with various decision-making tools in the area of web applications. Mohanty et al. (2010) evaluated 364 web services against 9 quality attributes using fuzzy multi-criteria decision making with a back-propagation-trained neural network. They concluded that the ‘min’ operator and the compensatory ‘and’ operator produced correct ranking results, which supports the effectiveness of fuzzy logic in comparative studies. Cheong and Way (2000) used a fuzzy AHP approach to capture the fuzziness and subjectiveness of prioritizing upgrade alternatives for a web server system, which is a multi-criteria decision-making problem. Caching, link bandwidth, redundant server, and intelligent load balancing alternatives were evaluated to overcome the problem of congested web traffic. Cao et al. (2006) took a step forward and developed a web-based method combining the concepts of AHP and fuzzy theory to assist decision making for partner selection in an agile virtual enterprise. The results indicated that the proposed method not only solves the problem of inconsistency in the judgment matrix, but also helps to better examine the strengths and weaknesses of the alternatives and hence to make a better decision more easily. Liu et al. (2007) applied a fuzzy AHP approach to evaluate e-commerce websites and concluded that fuzzy numbers are preferable to the crisp sets used in AHP and allow decision-makers more freedom of estimation. Yu et al. (2008) used fuzzy AHP to evaluate three commerce search engines and stated that fuzzy AHP is practical, efficient, accurate, easy to understand, and persuasive, and that it is of great value for enterprises on the web platform. As discussed previously, the decision to choose the right web development platform is difficult and complex, and it is a multi-criteria decision. Hence, this paper presents a fuzzy analytical hierarchy process based approach to compare the following three major web development platforms. - Linux/Apache/MySQL/PHP (LAMP) - Microsoft’s ASP.NET - Sun’s Java 2 Enterprise Edition (J2EE) These three platforms are evaluated against four major criteria, namely Security, Compatibility, Performance, and Licensing cost. Information security is an integral part of web development and must be taken care of throughout the project life cycle. Information security is extremely critical, as the web application must interact with distributed systems and other remote services. The web is very complex, and hence security is highly vulnerable. Chen et al. (2008) also agreed that with the increasing use of web services, more and more enterprises have recognized security to be the critical issue in real business systems. When the platform is upgraded, backward compatibility with older versions is necessary to make sure that applications built with an older version of the language can also run in the new, upgraded environment. Also, if a company needs to stay up to date with technology and take advantage of new features added in the upgraded version of the platform, backward compatibility must be available. Zhong and Yang (2009), while presenting a collection of design techniques for building enterprise web services, stressed a stable and compatible evolution of software and proposed a versioning-related technique to improve compatibility.
Performance, which includes the number of requests processed by the web application per unit of time, the mean time between failures, the speed of response to the client, etc., is no doubt an important criterion when evaluating web development platforms. Licensing cost is the amount paid for acquiring the software (language, tools, frameworks, etc.). Since a number of people in an organization need to work with the platform, a company has to acquire multiple licenses. Hence, licensing cost is an important parameter that a manager must take into account. Messerschmitt and Szyperski (2004) presented a paper on software planning and design in which they enumerated both performance and cost of development (which includes licensing cost) among the most recognized return-on-investment drivers. Many existing works study a single web development environment, but the analysis of acceptance of alternative products is highly neglected. Some comparative studies between frameworks within a single environment have been done, as stated above. However, selecting the right environment itself from among the web development platforms, i.e., LAMP, J2EE, and ASP.NET, is a key decision faced by many managers these days. An effective management tool for decision making is needed, since finding the right platform depends on various criteria such as security, cost, performance, compatibility, etc. The prioritization of these criteria varies with each institution. Hence, existing exploratory experiments to compare technologies do not suffice. Also, the exploratory experiments that compare programming languages are very limited. In this paper, we propose fuzzy AHP as a strong management tool to accomplish a comparative study of alternative web technologies. The technique for comparing the widely accepted web development platforms is discussed; managers can also extend it to other platforms in the future and base the study on additional criteria, taking into consideration the rapid growth and technological advancement of the IT industry. 2. Fuzzy AHP methodology 2.1. Extent analysis method on fuzzy AHP Chang’s extent analysis method on fuzzy AHP can be summarized as follows. First, triangular fuzzy numbers are used to compare alternatives. The approach below is applied both to the pair-wise comparisons that prioritize the criteria and to the comparisons of the alternatives against each criterion. Each fuzzy set represents a level of preference of one alternative over another. As shown in the figure below, the membership functions are defined so that the sets overlap each other, and each value of x ∈ R has a degree of membership in two different sets. The membership function of a triangular fuzzy number M = (l, m, u) is defined as follows, where l and u stand for the lower and upper values of the support of M, respectively, and m is the modal value.
\[ \mu_M(x) = \begin{cases} \dfrac{x - l}{m - l}, & x \in [l, m] \\ \dfrac{x - u}{m - u}, & x \in [m, u] \\ 0, & \text{otherwise} \end{cases} \quad (1) \] The necessary operations on two triangular fuzzy numbers \(M_1 = (l_1, m_1, u_1)\) and \(M_2 = (l_2, m_2, u_2)\) are \[ (l_1, m_1, u_1) \oplus (l_2, m_2, u_2) = (l_1 + l_2, m_1 + m_2, u_1 + u_2) \quad (2) \] \[ (l_1, m_1, u_1) \odot (l_2, m_2, u_2) \approx (l_1 l_2, m_1 m_2, u_1 u_2) \quad (3) \] \[ (\lambda, \lambda, \lambda) \odot (l_1, m_1, u_1) = (\lambda l_1, \lambda m_1, \lambda u_1), \quad \lambda > 0, \lambda \in \mathbb{R} \quad (4) \] \[ (l_1, m_1, u_1)^{-1} \approx (1/u_1, 1/m_1, 1/l_1) \quad (5) \] Let \(X = \{x_1, x_2, \ldots, x_n\}\) be an object set and \(U = \{u_1, u_2, \ldots, u_m\}\) a goal set. According to Chang’s extent analysis method, each object is taken in turn and extent analysis is performed for each goal. Therefore, \(m\) extent analysis values are obtained for each object: \[ M^1_{g_i}, M^2_{g_i}, \ldots, M^m_{g_i}, \quad i = 1, 2, \ldots, n \quad (6) \] where all \(M^j_{g_i}\) \((j = 1, 2, \ldots, m)\) are triangular fuzzy numbers. The fuzzy synthetic extent with respect to the \(i\)-th object is defined as \[ S_i = \sum_{j=1}^{m} M^j_{g_i} \odot \left[ \sum_{i=1}^{n} \sum_{j=1}^{m} M^j_{g_i} \right]^{-1} \quad (7) \] For each level of the hierarchy, fuzzy numbers are used for pair-wise comparison, collected in the matrix \[ A = (a_{ij})_{n \times n} \quad (8) \] If \((l, m, u)\) is the importance of element \(i\) over element \(j\), then the importance of element \(j\) over element \(i\) is \((l, m, u)^{-1}\). Once the synthetic extents are determined, the degree of possibility that one fuzzy number (or synthetic extent) is greater than another is determined as follows: \[ V(M_1 \geq M_2) = \sup_{x \geq y} \left[ \min(\mu_{M_1}(x), \mu_{M_2}(y)) \right] \quad (9) \] \[ V(M_1 \geq M_2) = 1 \quad \text{iff} \quad m_1 \geq m_2 \quad (10) \] \[ V(M_2 \geq M_1) = \mathrm{hgt}(M_1 \cap M_2) = \mu_{M_1}(d) = \frac{l_1 - u_2}{(m_2 - u_2) - (m_1 - l_1)} \quad (11) \] where \(d\) is the abscissa of the highest intersection point between \(\mu_{M_1}\) and \(\mu_{M_2}\). Chang further defines the degree of possibility for a fuzzy number to be greater than \(k\) fuzzy numbers, which is applied to the synthetic extents obtained in the previous step: \[ V(M \geq M_1, M_2, \ldots, M_k) = V[(M \geq M_1) \text{ and } (M \geq M_2) \text{ and } \ldots \text{ and } (M \geq M_k)] = \min_i V(M \geq M_i) \quad (12) \] Let \[ d'(A_i) = \min_{k \neq i} V(S_i \geq S_k) \quad (13) \] Hence the weight vector is given by \[ W' = (d'(A_1), d'(A_2), \ldots, d'(A_n))^T \quad (14) \] where \(A_i\) \((i = 1, 2, \ldots, n)\) are the \(n\) elements. After normalization, the final weight vector of the criteria/alternatives is \[ W = (d(A_1), d(A_2), \ldots, d(A_n))^T \quad (15) \] The consistency index (CI) and consistency ratio (CR) are defined as \[ CI = \frac{\lambda_{\text{max}} - n}{n - 1} \quad (16) \] \[ CR = \frac{CI}{RI} \quad (17) \] where \( \lambda_{\text{max}} \) is the largest eigenvalue, \( n \) is the number of items being compared in the matrix, and RI is the random index. 2.2. Application of fuzzy AHP to web development platform comparison The goal of the fuzzy AHP is to compare and select from three major platforms for web development. At the second level of the hierarchy, we determine how the four criteria, i.e., Security (C_1), Compatibility (C_2), Performance (C_3), and Licensing cost (C_4), contribute towards this objective. At the third level of the hierarchy, we determine how each alternative (Linux/Apache/MySQL/PHP (LAMP) (A_1), Microsoft’s ASP.NET (A_2), and Sun’s Java 2 Enterprise Edition (J2EE) (A_3)) evaluates against each criterion. Preference is determined using the triangular fuzzy scale.
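To make the computations in the following subsection easier to follow, here is a minimal, illustrative implementation of Chang's extent analysis (Eqs. (7) and (9)–(15)). It is a sketch rather than code from the original paper; the function names and the representation of triangular fuzzy numbers as (l, m, u) tuples are our own choices.

```python
def synthetic_extents(matrix):
    """Fuzzy synthetic extent S_i of each row of a pairwise comparison
    matrix of triangular fuzzy numbers (Eq. (7))."""
    row_sums = [tuple(sum(tfn[k] for tfn in row) for k in range(3)) for row in matrix]
    total = tuple(sum(r[k] for r in row_sums) for k in range(3))
    # Multiplying by the inverse of the grand total reverses l and u (Eq. (5)).
    return [(r[0] / total[2], r[1] / total[1], r[2] / total[0]) for r in row_sums]


def possibility(m1, m2):
    """Degree of possibility V(M1 >= M2) for triangular fuzzy numbers (Eqs. (9)-(11))."""
    l1, mid1, u1 = m1
    l2, mid2, u2 = m2
    if mid1 >= mid2:
        return 1.0
    if l2 >= u1:
        return 0.0
    return (l2 - u1) / ((mid1 - u1) - (mid2 - l2))


def weights(matrix):
    """Normalized weight vector W (Eqs. (12)-(15))."""
    s = synthetic_extents(matrix)
    d = [min(possibility(si, sk) for k, sk in enumerate(s) if k != i)
         for i, si in enumerate(s)]
    return [x / sum(d) for x in d]


# Example: the 'Compatibility' matrix of Table 5 in the next section.
table5 = [
    [(1, 1, 1), (1, 1.5, 2), (1.5, 2, 2.5)],
    [(0.5, 2 / 3, 1), (1, 1, 1), (1, 1.5, 2)],
    [(0.4, 0.5, 2 / 3), (0.5, 2 / 3, 1), (1, 1, 1)],
]
print([round(w, 2) for w in weights(table5)])  # [0.56, 0.34, 0.1]
```

Applied to the pairwise comparison matrix of Table 5 below, these functions reproduce the weight vector of Eq. (20) up to rounding.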
This reduces the chance of error compared with conventional AHP. The fuzzy AHP model solution is as follows. Preference scale: Figure 2 shows the triangular fuzzy membership functions; the linguistic scales for importance correspond to the fuzzy sets. Fig. 2. Linguistic scale of triangular numbers for relative importance **Table 1** <table> <thead> <tr> <th>Linguistic scale for importance</th> <th>Triangular fuzzy scale</th> <th>Triangular fuzzy Reciprocal scale</th> </tr> </thead> <tbody> <tr> <td>Equally important (EI)</td> <td>( \left( \frac{1}{2}, 1, \frac{3}{2} \right) )</td> <td>( \left( \frac{2}{3}, 1, 2 \right) )</td> </tr> <tr> <td>Weakly more important (WMI)</td> <td>( \left( 1, \frac{3}{2}, 2 \right) )</td> <td>( \left( \frac{1}{2}, \frac{2}{3}, 1 \right) )</td> </tr> <tr> <td>Strongly more important (SMI)</td> <td>( \left( \frac{3}{2}, 2, \frac{5}{2} \right) )</td> <td>( \left( \frac{2}{5}, \frac{1}{2}, \frac{2}{3} \right) )</td> </tr> <tr> <td>Very strongly more important (VSMI)</td> <td>( \left( 2, \frac{5}{2}, 3 \right) )</td> <td>( \left( \frac{1}{3}, \frac{2}{5}, \frac{1}{2} \right) )</td> </tr> <tr> <td>Absolutely more important (AMI)</td> <td>( \left( \frac{5}{2}, 3, \frac{7}{2} \right) )</td> <td>( \left( \frac{2}{7}, \frac{1}{3}, \frac{2}{5} \right) )</td> </tr> </tbody> </table> Evaluation of criteria weights: The pair-wise comparison decision matrix is given in Table 2. <table> <thead> <tr> <th>Table 2</th> <th>Fuzzy evaluation matrix with respect to the goal</th> </tr> </thead> <tbody> <tr> <td></td> <td>Security (C1)</td> </tr> <tr> <td>Security (C1)</td> <td>(1,1,1)</td> </tr> <tr> <td>Compatibility(C2)</td> <td>(2/5,1/2,2/3)</td> </tr> <tr> <td>Performance(C3)</td> <td>(1/2,2/3,1)</td> </tr> <tr> <td>Licensing cost(C4)</td> <td>(1/3,2/5,1/2)</td> </tr> </tbody> </table> Calculating the fuzzy synthetic extents using Eq. (7) gives: \[ S_1 = (0.24, 0.38, 0.59), S_2 = (0.10, 0.17, 0.29), S_3 = (0.17, 0.28, 0.45), S_4 = (0.10, 0.16, 0.29) \] Using Eq. (11), we have: \[ V(S_1 \geq S_2) = 1.00, V(S_1 \geq S_3) = 1.00, V(S_1 \geq S_4) = 1.00, V(S_2 \geq S_1) = 0.21, V(S_2 \geq S_3) = 0.52, V(S_2 \geq S_4) = 1.00, V(S_3 \geq S_1) = 0.69, V(S_3 \geq S_2) = 1.00, V(S_3 \geq S_4) = 1.00, V(S_4 \geq S_1) = 0.20, V(S_4 \geq S_2) = 0.93, V(S_4 \geq S_3) = 0.49 \] Using Eq. (13), \(d'(C_1) = 1\), \(d'(C_2) = 0.21\), \(d'(C_3) = 0.69\), \(d'(C_4) = 0.20\). After normalization, \(W = (0.48, 0.1, 0.33, 0.09)^T\) \hspace{1cm} (18) Hence, the weight of each criterion is given in Table 3. <table> <thead> <tr> <th>Table 3</th> <th colspan="4">Weight of each criterion</th> </tr> <tr> <th></th> <th>Security (C1)</th> <th>Compatibility (C2)</th> <th>Performance (C3)</th> <th>Licensing cost (C4)</th> </tr> </thead> <tbody> <tr> <td>Weight</td> <td>0.48</td> <td>0.10</td> <td>0.33</td> <td>0.09</td> </tr> </tbody> </table> Evaluation of alternatives with respect to each criterion: For the Security criterion (C1), refer to Table 4. <table> <thead> <tr> <th>Table 4</th> <th>Evaluation with respect to ‘Security’ criteria</th> </tr> </thead> <tbody> <tr> <td></td> <td>Security (C1)</td> </tr> <tr> <td></td> <td>(1,1,1)</td> </tr> <tr> <td>LAMP(A1)</td> <td>(2/5,1/2,2/3)</td> </tr> <tr> <td>ASP.NET(A2)</td> <td>(1/2,2/3,1)</td> </tr> <tr> <td>J2EE(A3)</td> <td>(1/3,2/5,1/2)</td> </tr> </tbody> </table> Calculating the fuzzy synthetic extents using Eq. (7) gives: \[ S_1 = (0.30, 0.47, 0.71), S_2 = (0.15, 0.21, 0.30), S_3 = (0.21, 0.33, 0.51). \] Using Eq. (11), \[ V(S_1 \geq S_2) = 1.00, V(S_1 \geq S_3) = 1.00, V(S_2 \geq S_1) = 0.01, V(S_2 \geq S_3) = 0.42, V(S_3 \geq S_1) = 0.61, V(S_3 \geq S_2) = 1.00 \] Using Eq.
(13), \(d'(A_1) = 1.00, d'(A_2) = 0.01, d'(A_3) = 0.61\) and after normalization, \[ W = (0.62, 0.006, 0.376)^T \hspace{1cm} (19) \] For Criteria Compatibility C2, refer to Table 5. Table 5 Evaluation with respect to ‘Compatibility’ criteria <table> <thead> <tr> <th>Compatibility(C2)</th> <th>LAMP(A1)</th> <th>ASP.NET(A2)</th> <th>J2EE(A3)</th> </tr> </thead> <tbody> <tr> <td>LAMP(A1)</td> <td>(1,1,1)</td> <td>(1,3/2,2)</td> <td>(3/2, 2, 5/2)</td> </tr> <tr> <td>ASP.NET(A2)</td> <td>(1/2,2/3,1)</td> <td>(1,1,1)</td> <td>(1,3/2,2)</td> </tr> <tr> <td>J2EE(A3)</td> <td>(2/5,1/2,2/3)</td> <td>(1/2,2/3,1)</td> <td>(1,1,1)</td> </tr> </tbody> </table> Calculating fuzzy synthetic using Eq. (7) is as follows: \[ S_1 = (0.29, 0.46, 0.70), S_2 = (0.21, 0.32, 0.51), S_3 = (0.16, 0.22, 0.34) \]. Using Eq. (11), \[ V(S_1 \geq S_2) = 1.00, V(S_1 \geq S_3) = 1.00, V(S_2 \geq S_1) = 0.62, V(S_2 \geq S_3) = 1.00, V(S_3 \geq S_1) = 0.17, V(S_3 \geq S_2) = 0.56 \] Using Eq. (13), \( d'(A_1) = 1.00, d'(A_2) = 0.62, d'(A_3) = 0.17 \). After normalization, \[ W = (0.56, 0.35, 0.09)^T \] \hspace{1cm} (20) For Criteria Performance (C3), refer to Table 6. Table 6 Evaluation with respect to ‘Performance’ criteria <table> <thead> <tr> <th>Performance(C3)</th> <th>LAMP(A1)</th> <th>ASP.NET(A2)</th> <th>J2EE(A3)</th> </tr> </thead> <tbody> <tr> <td>LAMP(A1)</td> <td>(1,1,1)</td> <td>(3/2,2,5/2)</td> <td>(3/2, 2, 5/2)</td> </tr> <tr> <td>ASP.NET(A2)</td> <td>(1/2,2/3,1)</td> <td>(1,1,1)</td> <td>(1,3/2,2)</td> </tr> <tr> <td>J2EE(A3)</td> <td>(2/5,1/2,2/3)</td> <td>(1/2,2/3,1)</td> <td>(1,1,1)</td> </tr> </tbody> </table> Calculating fuzzy synthetic using Eq. (7) is as follows: \[ S_1 = (0.31, 0.50, 0.75), S_2 = (0.15, 0.25, 0.40), S_3 = (0.16, 0.25, 0.46) \]. Using Eq. (11), \[ V(S_1 \geq S_2) = 1.00, V(S_1 \geq S_3) = 1.00, V(S_2 \geq S_1) = 0.26, V(S_2 \geq S_3) = 1.00, V(S_3 \geq S_1) = 0.37, V(S_3 \geq S_2) = 1.00 \] Using Eq. (13), \( d'(A_1) = 1.00, d'(A_2) = 0.26, d'(A_3) = 0.37 \) and after normalization, \[ W = (0.61, 0.16, 0.23)^T \] \hspace{1cm} (21) For Criteria licensing cost (C4), refer to Table 7. Table 7 Evaluation with respect to ‘Licensing cost’ criteria <table> <thead> <tr> <th>Licensing cost(C4)</th> <th>LAMP(A1)</th> <th>ASP.NET(A2)</th> <th>J2EE(A3)</th> </tr> </thead> <tbody> <tr> <td>LAMP(A1)</td> <td>(1,1,1)</td> <td>(3/2,2,5/2)</td> <td>(1/2, 1, 3/2)</td> </tr> <tr> <td>ASP.NET(A2)</td> <td>(1/2,2/3,1)</td> <td>(1,1,1)</td> <td>(2/5,1/2,2/3)</td> </tr> <tr> <td>J2EE(A3)</td> <td>(2/5,1/2,2/3)</td> <td>(3/2,2,5/2)</td> <td>(1,1,1)</td> </tr> </tbody> </table> Calculating fuzzy synthetic using Eq. (7) is as follows: \[ S_1 = (0.23, 0.40, 0.63), S_2 = (0.14, 0.20, 0.29), S_3 = (0.25, 0.40, 0.69) \]. Using Eq. (11), \[ V(S_1 \geq S_2) = 1.00, V(S_1 \geq S_3) = 1.00, V(S_2 \geq S_1) = 0.23, V(S_2 \geq S_3) = 0.19, V(S_3 \geq S_1) = 1.00, V(S_3 \geq S_2) = 1.00 \] Using Eq. (13), \( d'(A_1) = 1.00, d'(A_2) = 0.19, d'(A_3) = 1.00 \). After normalization, \[ W = (0.46, 0.09, 0.46)^T \] \hspace{1cm} (22) The final weight for each alternative is obtained by multiplying the criteria weight matrix with the matrix obtained by calculating weights for each alternative when evaluated with respect to each criterion. Using Eqs. (18-22) we get final weight matrix as summarized in Table 8. 
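The final aggregation step can also be written out explicitly. The short sketch below (illustrative only; the variable names are ours) combines the criteria weights of Eq. (18) with the alternative weights of Eqs. (19)–(22) and reproduces, up to rounding, the scores reported in Table 8.

```python
# Final scores: weighted sum of the alternative weights (Eqs. (19)-(22))
# with the criteria weights of Eq. (18).
criteria_weights = [0.48, 0.10, 0.33, 0.09]   # Security, Compatibility, Performance, Licensing cost

alternative_weights = {                        # per criterion: (LAMP, ASP.NET, J2EE)
    "Security":       [0.62, 0.006, 0.376],    # Eq. (19)
    "Compatibility":  [0.56, 0.35, 0.09],      # Eq. (20)
    "Performance":    [0.61, 0.16, 0.23],      # Eq. (21)
    "Licensing cost": [0.46, 0.09, 0.46],      # Eq. (22)
}

platforms = ["LAMP", "ASP.NET", "J2EE"]
for i, platform in enumerate(platforms):
    score = sum(w * alternative_weights[c][i]
                for w, c in zip(criteria_weights, alternative_weights))
    print(f"{platform}: {score:.3f}")  # approx. 0.596, 0.099, 0.307 (cf. Table 8)
```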
Table 8 Final decision using fuzzy AHP <table> <thead> <tr> <th>Platform</th> <th>Score</th> </tr> </thead> <tbody> <tr> <td>Linux/Apache/MySQL/PHP (LAMP)</td> <td>0.596</td> </tr> <tr> <td>Microsoft’s ASP.NET</td> <td>0.098</td> </tr> <tr> <td>Sun’s Java 2 Enterprise Edition (J2EE)</td> <td>0.307</td> </tr> </tbody> </table> 3. Discussion and analysis Based on the scores developed in this paper, the web development platform chosen is LAMP. This model takes into account only four criteria for choosing the best platform; however, more criteria can be added based on the company and its requirements. In addition, more alternatives can be evaluated, since for each programming language mentioned above there are multiple frameworks available. A framework consists of a set of reusable functions that can be used in web development. Further criteria that a company might want to evaluate, depending on its industry, are the availability of frameworks, availability of components, ease of configuration, quality of staffing required, support of the language for the existing database, etc. The pair-wise comparison values might also differ from the model presented above based on the company's situation and policies. For example, a startup company may assign more importance to performance than to compatibility. On the other hand, an established firm that regularly releases higher versions of its software might give more importance to backward compatibility. Hence, the above model forms a basis that companies can use to decide on the best platform for developing their web application. 4. Conclusion The fuzzy AHP technique has been successfully used to solve the multi-criteria decision-making objective of finding the right platform for web application development. An effective management tool has been devised to address this decision-making problem. Previous works generally focused on single-platform studies or exploratory experiments. With the enormous number of web development projects undertaken each year, managers need an accurate and reliable tool to determine the right development platform based on factors related to company culture, financials, and priorities. Additionally, making a decision based only on the widely used AHP model is prone to error, since it does not handle the vagueness and uncertainty in human judgment. The fuzzy approach takes this into account using overlapping linguistic scales and, with the concept of the membership function, provides a more stable MCDM model. Also, a linguistic scale, as opposed to the definitive preference values of the AHP model, gives better confidence to the participants developing the pair-wise comparison matrix. Hence, using the extent analysis method, a fuzzy AHP model has been devised to address the real-life critical decision of determining the right web development platform. This paper shows a complete illustration of implementing fuzzy AHP in this context, which can help one adopt the methodology with ease. In this illustration, based on the overall scores, the criterion ‘Security’ contributes most to the final goal of a successful web development effort, followed by ‘Performance’, ‘Compatibility’, and ‘Licensing cost’. The Linux/Apache/MySQL/PHP (LAMP) platform, with the Linux operating system, Apache server, MySQL database, and PHP scripting language, is found to be the best platform for web application development.
Furthermore, this technique can be extended by managers to comparatively evaluate other platforms in the future and to base their study on additional criteria, as elaborated in the section ‘Discussion and analysis’. This takes into consideration the rapid growth and technological changes faced by the IT industry.
{"Source-Url": "http://www.growingscience.com/msl/Vol2/msl_2011_67.pdf", "len_cl100k_base": 7749, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 26706, "total-output-tokens": 9573, "length": "2e12", "weborganizer": {"__label__adult": 0.0002982616424560547, "__label__art_design": 0.00037479400634765625, "__label__crime_law": 0.0003063678741455078, "__label__education_jobs": 0.0013761520385742188, "__label__entertainment": 5.8591365814208984e-05, "__label__fashion_beauty": 0.00013375282287597656, "__label__finance_business": 0.0011835098266601562, "__label__food_dining": 0.0003058910369873047, "__label__games": 0.0004038810729980469, "__label__hardware": 0.0005846023559570312, "__label__health": 0.0004968643188476562, "__label__history": 0.00016033649444580078, "__label__home_hobbies": 7.528066635131836e-05, "__label__industrial": 0.0003609657287597656, "__label__literature": 0.00021016597747802737, "__label__politics": 0.00022912025451660156, "__label__religion": 0.00029850006103515625, "__label__science_tech": 0.01120758056640625, "__label__social_life": 7.069110870361328e-05, "__label__software": 0.007404327392578125, "__label__software_dev": 0.9736328125, "__label__sports_fitness": 0.00018453598022460935, "__label__transportation": 0.00037384033203125, "__label__travel": 0.00017309188842773438}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30790, 0.07218]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30790, 0.59655]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30790, 0.84745]], "google_gemma-3-12b-it_contains_pii": [[0, 2491, false], [2491, 7041, null], [7041, 11296, null], [11296, 13508, null], [13508, 15729, null], [15729, 17898, null], [17898, 20405, null], [20405, 23271, null], [23271, 27209, null], [27209, 30790, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2491, true], [2491, 7041, null], [7041, 11296, null], [11296, 13508, null], [13508, 15729, null], [15729, 17898, null], [17898, 20405, null], [20405, 23271, null], [23271, 27209, null], [27209, 30790, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30790, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30790, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30790, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30790, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30790, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30790, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30790, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30790, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30790, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30790, null]], "pdf_page_numbers": [[0, 2491, 1], [2491, 7041, 2], [7041, 11296, 3], [11296, 13508, 4], [13508, 15729, 5], [15729, 17898, 6], [17898, 20405, 7], [20405, 23271, 8], [23271, 27209, 9], [27209, 30790, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30790, 0.22167]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
26d67b8077bd4cf74288d2c71d350796f5072ad1
A New Approach for Software Requirements Elicitation Prasad Rajagopal¹, Roger Lee¹, Thomas Ahlswede¹, Chia-Chu Chiang², Dale Karolak³ ¹ Department of Computer Science, Central Michigan University, U.S.A. {lee, ahlswede}@cps.cmich.edu ² Department of Computer Science, University of Arkansas- Little Rock, U.S.A. cxchiang@ualr.edu ³ Intier Automotive Closures, U.S.A. Dale_karolak@yahoo.com Abstract Requirements elicitation is both the hardest and most critical part of software development, since errors at this beginning stage propagate through the development process and are the hardest to repair later. This paper proposes an improved process for requirements elicitation. The key improvements are: (1) to train the non-technical stakeholders (primarily the users) in the capabilities and limitations of computer hardware, software, and of software developers; (2) to identify keywords while interviewing the stakeholders, visually as well as in text form; (3) to use keyword mapping to generate candidate system requirements; (4) to apply the techniques of Quality Function Deployment (QFD) and the Capability Maturity Model (CMM) during the elicitation process. 1. Introduction This paper proposes an improved process for software requirements elicitation. The hardest single part of building a software system is deciding what to build. No other part of the work so cripples the resulting system if done wrong, and no other part is more difficult to rectify later [7]. Therefore, requirements elicitation, the first phase of the software development process, is arguably the most critical. A software requirement has been defined by the IEEE [12] as (1) a condition or capability needed by a user to solve a problem or achieve an objective; (2) a condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document; (3) a documented representation of a condition or capability as in (1) or (2). Requirements are not limited to the functionality of the system, as often supposed, but include other aspects. Different authors have presented different definitions, but there are clearly nonfunctional requirements as well as functional ones. Davis [6] classifies requirements as: - Functional requirements - Nonfunctional requirements - Performance/reliability - Interfaces - Design constraints 2. Problems in Requirements Elicitation Errors in requirements elicitation are, overall, the most serious in software development, and the hardest to repair. Studies by Beichter [1] indicate that 70% of system errors are due to inadequate system specification and 30% are due to design issues (Figure 1. Breakdown of system errors (Beichter)). The SEI National Software Capacity Study [17] indicates some major factors for system development failure, rated from very serious to not serious (Figure 2. Breakdown of system errors (SEI)). We see that inadequate system specification, lack of user input, and changing requirements are major factors that contribute to the failure of system development. ### 2.1.
Classification of Elicitation problems McDermid [14] gives a list of common elicitation problems, which can be classified as: - **Problems of scope.** The boundary of the system is ill-defined, so that unnecessary design information may be given, or necessary design information left out. - **Problems of understanding.** Users have incomplete understanding of their needs; analysts have poor knowledge of the problem domain; user and analyst speak different languages (literally or figuratively); “obvious” information may be omitted; different users may have conflicting needs or perceptions of their needs; requirements are often vaguely expressed, e.g., “user friendly” or “robust”. - **Problems of volatility.** Requirements evolve over time, either because of changing needs or because of changing perceptions by the stakeholders. ### 2.2. Problems of scope Requirements elicitation must begin with an organizational and context analysis to determine the boundary of the target system as well as the objectives of the system. Less ambitious elicitation techniques that do not fully address this concern run the risk of producing requirements which are incomplete and potentially unusable, because they do not adhere to the user’s or organization’s true goals for the system. Performing an organizational and context analysis allows these goals to be captured and then used later to verify that the requirements are indeed usable and correct. Elicitation techniques can be overambitious as well. Elicitation must focus on the creation of requirements, not on design activities, in order to adequately address users’ concerns. Elicitation strategies which produce requirements in the form of high-level designs run the risk of creating requirements which are ambiguous to the user community. These requirements may not be verifiable by the users, because they do not adequately understand the design language. Also, requirements expressed as a design are much more likely to incorporate additional decisions not reflecting user or sponsor needs, i.e., the requirements will not be precise and necessary. ### 2.3. Problems of Understanding A Savant Institute study found that “56% of errors in installed systems were due to poor communication between user and analyst in defining requirements and that these types of errors were the most expensive to correct using up to 82% of available staff time” [3]. Problems of understanding during elicitation can lead to requirements which are ambiguous, incomplete, inconsistent, and even incorrect, because they do not address the stakeholders’ true needs. Lack of user input arises when users are not fully aware of their needs or are unable to communicate them. It also arises when analysts and developers fail to ask the necessary questions. When a system needs to be defined, a series of meetings involving the stakeholders needs to be held. These stakeholders include clients, users, software engineers, system analysts, domain experts, managers, etc. It is often assumed that having more people in a meeting helps refine the system requirements and makes brainstorming easier and more effective. But having more stakeholders in a meeting brings one potential problem: the language barrier. When there is no proper common protocol for communication, the whole purpose of meeting together is defeated. Different stakeholders may speak literally different languages, e.g., Chinese and English.
But even within the same language, it is notorious that stakeholders from different domains (such as management, manufacturing, marketing, and technical) use the same words with different meanings. When literally different languages are used, there is the additional task of translating the relevant documents. When figuratively different “languages” are used, the problem may not even be recognized. 2.4. Problems of Volatility One primary cause of requirements volatility is that “user needs evolve over time” [3]. The requirements engineering process of eliciting, specifying, and validating should not be executed only once during system development, but rather should be returned to so that the requirements can reflect the new knowledge gained during specification, validation, and subsequent activities. A requirements engineering methodology should be iterative in nature, “so that solutions can be reworked in the light of increased knowledge” [5]. Another cause of requirements volatility is that the requirements are the product of the contributions of many individuals, and these individuals often have conflicting needs and goals. For example, there is often more than one customer, with each customer having different and often contradictory views and interests [9]. Volatility also arises when clients or customers do not fully understand the capabilities and limits of the technology being offered. They often have unrealistic expectations of either the functionality that can be provided, or of the time scale in which the system can be developed. If these expectations are not corrected as early as possible in the elicitation process, the specification will incorporate them and will have to be revised later, at considerable cost. 3. Method Overview The elicitation method proposed in this paper starts by dealing with the problem of lack of user input. The solution involves a series of sessions where users or other stakeholders are exposed to some of the basic idea of what they can expect from the developers and domain experts. They are also taught about the powers and limitations of the computer, and about the availability of other resources. This knowledge helps the users to clarify their needs and develop realistic rather than purely imaginative expectations. Another problem is to define precise system requirements. Precision here depends on the identification and definition of keywords that exactly reflect the user’s needs and wants. Defining a system requirement is one of the hardest parts in any requirements elicitation process. If we get the system requirements wrong, we get the whole system wrong, and building a wrong system makes no sense. So the critical goal of any requirements elicitation is to build meaningful, precise and realistic requirements that reflect the needs of the user. One of the main goals of this paper is to bring forth a new approach for eliciting good system requirements. The proposed approach starts by conducting a series of interviews, structured or unstructured, with various stakeholders. With these interview sessions we record all the keywords used by the participants. They are also given a template document, with which keywords can be classified for future analysis. With the templates filled out by the users and other stakeholders, along with the recorded keywords, domain experts can analyze each keyword and its meaning contextually. Here, a more important thing is to find out in what context each keyword is uttered. 
If we lose the context, the meaning of the keyword may be lost, and hence we fail to build a good system requirement specification. Once we find the exact contexts in which the keywords are uttered, the approach maps each keyword to every other related keyword. Keywords that are related one way or another are grouped into sets. Each set of keywords can be represented diagrammatically, as will be shown in Section 4. With these sets prepared, we further analyze each keyword and its elements, looking for related elements in other sets. This process is repeated until the keywords are partitioned into non-intersecting sets. 3.1 Stages of Elicitation The entire elicitation process can be divided into 11 stages. In this section we describe what each stage means and what elicitation problems it deals with. Stage 1: Collect information about user needs and expectations Lack of user input and unrealistic user expectations have long been considered very serious problems in the requirements elicitation process. This stage addresses these problems by compiling information about: - The stakeholders’ (developers’, software engineers’, and domain experts’) abilities and domain knowledge. - The limitations of computer resources and functionality, and the availability of other resources. This careful compilation of information will be used in the next stage to train the clients/users and make them aware of what they can and cannot expect from the software developers. Responsibility: all stakeholders. Stage 2: Train the stakeholders Responsibility: developers. Training the clients, users, and managers with the information compiled in the previous stage makes the users aware of what they can expect from the developers and software engineers involved in the project. At this stage, missing user input can be supplemented and unrealistic expectations of functionality or time scale can be weeded out. Stage 3: Write descriptions of user needs Responsibility: all stakeholders. Each stakeholder involved will write a description of his/her needs for the proposed system and of the exact purpose of the system as the stakeholder understands it. Since the clients and customers have already been educated about the computer limitations and availability of resources through the training sessions in the previous stage, they will have fewer, if any, unrealistic expectations about the system they are proposing to build. Realistic expectations at this stage will also reduce volatility, since expectations are less likely to change as the realities of the development process become clearer. Stage 4: Conduct oral interviews with users Responsibility: developers. The interviews at this stage are based on the users’ written descriptions from stage 3, and may be structured or unstructured. The purposes of the interviews are (1) to elaborate and refine the needs and expectations expressed at stage 3, and (2) to identify keywords used by the users, which will be critical to the creation of formal requirements. Stage 5: Map keywords Well-defined operational definitions are essential for building unambiguous system requirements for any system. In this stage, the keywords identified in stage 4 are analyzed and grouped by keyword mapping in order to frame operational definitions. The keyword mapping technique will be described in Section 4.
Stage 6: Classify and prioritize system requirements After creating well-defined operational definitions, we have a base for building clear and unambiguous system requirements, and it becomes necessary to prioritize and classify each requirement. The classification is usually based on the project’s cost and schedule. At this stage, we also resolve conflicting expectations among stakeholders, as well as any remaining ambiguity or lack of clarity in the system requirements as produced in stage 5. We propose using Quality Function Deployment (QFD), a requirements process technique, for this purpose. The use of QFD will be described in Section 7. Stage 7: Fit the requirements to the domain Forming domain-specific requirements has always been a difficult task, and it depends on domain experts and knowledge experts. Though the system requirements formed in the previous stage are specific and unambiguous, they may address issues outside the problem domain, which is unnecessary, wasteful, and may hinder the rest of the development process. These issues can be solved only by employing knowledge experts and domain experts; this task is done in this stage. Stage 8: Prototype At this point we believe we have a correct set of requirements. We test them by building a prototype. Are these requirements complete, sufficient, and not excessive? What will be the outcome? Will this set of requirements fulfill the needs of the clients and other stakeholders? Stage 9: Check the prototype Every prototype built should be checked for its quality; in other words, every system requirement should be tested for its realism, quality, unambiguity, and exactness. With this quality check (repeated as necessary), stakeholders have the opportunity to form even more precise system requirements by eliminating unnecessary information. This will further help developers and software engineers to understand what they are going to build and whether they can fulfill the requirements of the clients. Stage 10: Analyze risks and costs This stage calculates and eliminates unnecessary risks and costs. We propose to use the popular Capability Maturity Model for this analysis. A brief introduction to CMM and how its features are used here will be given in Section 8. Stage 11: Overall analysis Finally, we analyze the whole set of system requirements to make sure everything done so far is correct. From this point on, we actually start building the system, with careful supervision at each stage of its development process. 4. New Techniques for Requirements Elicitation The approach proposed in this paper presents a standard template of common problems in system specification and a flow diagram for solving those problems systematically and methodically. 4.1. Potential Problems in the previous model Traditional requirements elicitation does not clearly explain any of its techniques or employ any of the early software requirements detection techniques available to avoid volatility, a major problem in the elicitation process. Elicitation starts with a data requirement phase which employs structured or unstructured interviews and brainstorming. But this often leads to lack of user input, or to unrealistic user input due to a poor understanding of resources and time constraints, even though the interviews are structured and the developers know what they want to ask. One other major problem in requirements elicitation is that when all stakeholders meet together and start interviewing each other, language differences can be a major hindrance.
This may lead to misunderstandings between parties, missed obvious information, and conflicting views. When data acquisition is not done properly, developers will not know the exact requirements or may misunderstand them, and the requirements will have to be changed later; this volatility involves a huge cost. 4.2. New techniques to solve these problems The proposed method addresses each of the above shortcomings. The following subsections explain how we address and solve each issue. 4.2.1. Training sessions to eliminate “lack of user input” and “poor understanding” To avoid the problems of “lack of user input” and poor user understanding, at the beginning of requirements elicitation stakeholders need to be trained or informed about the developers’ skills, the computational abilities, the environment in which the developers and other stakeholders are going to work, and what the developers can offer to the customers. Most importantly, stakeholders need to be taught what software engineers cannot offer in the project. This greatly reduces the problem of unrealistic expectations concerning functionality and deadlines, which in turn reduces the volatility of requirements at the beginning stage. Wrong or unrealistic expectations and requirements may later cause the whole system development process to fail. 4.2.2. Recording keywords Many system development failures occur because the users cannot define their needs precisely, or because developers and domain experts miss “obvious” words that contribute essentially to system requirements. These problems can largely be avoided by recording each keyword spoken by each stakeholder. Stakeholders will also list keywords on a template form with different columns for each class of keywords, e.g., behavioral keywords, functional keywords, non-functional keywords, purpose keywords, etc. Comparing the templates will show which keywords are used most by whom and hence identify their interests. If the same keyword appears in different columns, this is one signal of an ambiguity which will have to be resolved. 4.2.3. Pictorial representation of needs and wants to reduce language barriers The usual procedure in multilingual situations is to employ a translator and translate each time somebody communicates in a different language. But whether or not more than one language is used, the problem remains that stakeholders from various domains (technical, management, marketing, manufacturing, etc.) understand words in different ways. Thus, even after we have identified keywords, their meaning is still often ambiguous or vague. The resulting confusion can be greatly reduced by supplementing the keywords with visual images. Images or pictures speak more powerfully than words: anybody from any part of the world can understand images, no matter what language they speak and no matter how well they are educated. The process of “labeling” each keyword with a picture gives the stakeholders an opportunity to come to agreement about its meaning, and thereafter provides a convenient reminder of the meaning that has been agreed on. The advantages of using this process are: - It saves the time spent translating documents written in different languages. - Agreeing on a common pictorial representation avoids conflicting interpretations of the representation. 4.2.4.
Keyword Mapping In addition to supporting the formation of operational definitions, the keyword mapping technique largely avoids the problem of missing “obvious” information, which may later grow to the point where system development has to be stopped. 4.2.5. Operational definition extraction Based on the keywords used by the stakeholders, domain experts can extract all those keywords that are specific to the domain, and operational definitions are then formed based on them. Once these definitions of the wants and needs are framed carefully, developers know what they should do, and implementing the well-defined specifications will satisfy the customers/users. The details of keyword mapping and the use of keyword mapping to build operational definitions are described in the next section. 5. Building operational definitions based on keyword mapping From the interview sessions with customers and other stakeholders we have recorded their wishes. From this, a large number of keywords are carefully collected, analyzed, and organized. These keywords by themselves do not explain or tell us anything. But if we can relate them to each other, we are at least in a better position to define the users’ needs with the precision we need. Starting with the templates filled out by the stakeholders, the keywords are organized into sets according to their type: functional, nonfunctional, performance/reliability, interface, design constraint. Some additional categories are desirable as well: behavioral keywords, which describe user initiatives and responses to the system; and attributes or desired properties of the system or parts of the system. The technique is a one-to-one mapping. With all the keywords recorded and represented in sets, we can capture the most obvious things that might be missed if they were not recorded and represented this way. To do this, the paper applies a technique called keyword mapping (Figure 3). The technique represents each keyword as an object or entity, so each keyword has its own attributes and behaviors, and we use these attributes and behaviors to relate it to other keywords. Related keywords, say k1 and k2, form a new set which may be labeled S1. Similarly, a new set S2 is formed when a keyword k3 and a keyword k4 are related. The key aspect of the technique is that a keyword k1 is associated with a keyword k2 only if they are related in a specific way, such as: - Cost - Design - Knowledge domain - Software issues or any of many others. Once we are left with no more keywords to form groups, we have each set $S_k$ containing only keywords that will form a well-defined requirement for a particular customer or stakeholder. A requirement elicited using this technique eliminates unnecessary information and is thus accurate and well defined, giving software developers a clear map of what they have to do. Each set can be represented as: **Attribute Set** = \{(k1), (k2), (k3), (k4), (k6), (k7)\} **Behavioral Keyword set** = \{(k1), (k2), (k3), (k4)\} **Non-functional Keyword set** = \{(k1), (k2), (k3), (k4)\} **Functional Keyword set** = \{(k1), (k2), (k3), (k4)\} Figure 3. Keyword Mapping Mapped keyword representation: Map(Att, Behav) = {(k2,k3), (k4,k2)} Map(Non-funct, Att) = {(k4,k3), (k3,k7)} Mapping Nested Sets: Figure 4 shows the next step in the process, mapping nested sets. Map(Map(Attr, Behav), Map(Non-funct, Attr)) = { [(k2,k3), (k3,k7)], [(k4,k2), (k4,k3)] } Figure 4. Mapping Nested Sets Now the keywords k2, k3, k3, k7, k4, k2, k4, k3 form a sentence.
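To make the mapping operations above concrete, the following short sketch models the keyword sets and the Map operation as plain Python sets of pairs. It is illustrative only: the keyword identifiers follow the example above, the grouping rule is one plausible reading of the nested-mapping step (pairs are grouped when they share a keyword), and the function name is our own.

```python
# Illustrative model of the keyword mapping technique described above.
# Keywords are grouped into typed sets; a Map relates keywords of two
# sets that are associated (e.g. by cost, design, or knowledge domain).

attribute_kw      = {"k1", "k2", "k3", "k4", "k6", "k7"}
behavioral_kw     = {"k1", "k2", "k3", "k4"}
non_functional_kw = {"k1", "k2", "k3", "k4"}

# Pairs found to be related during analysis (from the example above).
map_att_behav    = {("k2", "k3"), ("k4", "k2")}   # Map(Att, Behav)
map_nonfunct_att = {("k4", "k3"), ("k3", "k7")}   # Map(Non-funct, Att)

def map_nested(map_a, map_b):
    """Group pairs from two mappings that share a keyword, mirroring
    Map(Map(Attr, Behav), Map(Non-funct, Attr)) in the example above."""
    nested = []
    for (a1, a2) in sorted(map_a):
        related = [(b1, b2) for (b1, b2) in sorted(map_b) if b1 in (a1, a2)]
        if related:
            nested.append([(a1, a2)] + related)
    return nested

print(map_nested(map_att_behav, map_nonfunct_att))
# [[('k2', 'k3'), ('k3', 'k7')], [('k4', 'k2'), ('k4', 'k3')]]
```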
Thanks to the earlier precautions of training the stakeholders and providing pictorial labels for the keywords, the sentence formed will be a system requirement that has been elicited in a systematic and neat way, with:
- no obvious information omitted;
- no conflicting views or ambiguity;
- no ill-defined system scope;
- no unnecessary information.

6. Process Flow

The process flow diagram shown in Figure 5 is another representation of the elicitation process, slightly different from the 11-stage process described earlier, but more convenient for the remaining discussion.

Figure 5. Method Flow

7. Requirements Elicitation using QFD

7.1. What is QFD?

Quality Function Deployment (QFD) is defined by Soto [19] as "a systematic process for motivating a business to focus on its customers." It is based on market research: understanding customers' needs and desires, and the effectiveness of relevant products in meeting those needs and desires. In QFD, cross-functional teams identify and resolve the issues involved in developing products to satisfy their customers.

7.2. Why QFD?

Once a team has identified the customers' wishes, QFD is used for two basic purposes [13]:
- to improve the communication of customer needs throughout the organization;
- to improve the completeness of specifications and to make them traceable directly to customer wants and needs.

QFD requires that representatives of the different organizations involved in producing the product be involved in its definition. Consequently, these representatives discuss the meaning of the customers' wishes and work together to ensure that they come to a common understanding; communication throughout the organization is greatly improved. This process also uncovers many issues whose resolution leads to a more complete specification. QFD is organized around a model called the House of Quality (HOQ), a set of "rooms" encapsulating the various processes necessary to develop a complete and satisfactory product specification. The HOQ is built by in-house teams from various disciplines, under the guidance of a trained QFD facilitator. As mentioned before, a prerequisite for QFD is market research, and the results of the market research are the raw data used to build the HOQ. The QFD team then builds the HOQ room by room. Given one or more specific objectives (e.g., a narrow focus such as "optimize engine performance" or a more global focus such as "optimize overall passenger comfort"), the QFD process starts with obtaining customer requirements through market research. These research results are inputs into the House of Quality. The following is a discussion of each of the "rooms" of the House of Quality and how they are built.

**Figure 6. Components of QFD [20]**

The "Whats" Room: This room contains the requirements, as identified by the QFD team. "Typically there are many customer requirements, but using a technique called affinity diagramming, the team distills these many requirements into the 20 or 30 most important needs." In affinity diagramming, the team discusses the initial requirements provided by the users and clusters them into a smaller number of more general requirements. (This is an appropriate place to apply pictorial labels to keywords, as discussed in section 4.2.3.) The results are placed into the "Whats" room.
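As a small illustration of the affinity diagramming step, the following Python sketch clusters raw customer statements under more general needs that then enter the "Whats" room. All statements and group names are invented examples; in practice the clustering is performed by the QFD team in discussion, and the code merely records the outcome.

```python
# Invented raw customer statements, each already assigned by the team to a
# more general need during affinity diagramming.
raw_statements = [
    ("the app must open instantly",       "fast response"),
    ("searching should never feel slow",  "fast response"),
    ("I want my data kept private",       "secure data"),
    ("nobody else should see my files",   "secure data"),
    ("the screens should be uncluttered", "easy to learn"),
]

def affinity_groups(statements):
    """Cluster raw statements under the general need assigned to them."""
    groups = {}
    for text, need in statements:
        groups.setdefault(need, []).append(text)
    return groups

whats = affinity_groups(raw_statements)
for need, texts in whats.items():
    print(f'What: "{need}" ({len(texts)} raw statements)')
```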
The Importance Ratings and Customer Competitive Assessment Rooms: When QFD is to be used, the market research has to be designed around the expectation that the QFD team will use it. Market research provides information about the varying priority of the expressed customer needs (the Importance Ratings room) and about the strengths and weaknesses of both the client's and the competitors' existing products. Note that the Importance Ratings room is associated with the "Whats" room, while the Customer Competitive Assessment room feeds separately into the HOQ's relationship matrix.

The "Hows" Room: The "Hows" room requires completion of the "Whats" room. In the "Hows" phase, the team develops metrics for success in the "Whats" previously identified. Each "What" requires at least one "How", and some "Whats" may require more than one.

**The Relationships Matrix Room:** After completion of the "Whats", "Hows", and Customer Competitive Assessment rooms, it is possible to start building the Relationships Matrix. The team attempts to define the relationship of every "What" to every "How". The relationship may be strong, medium, or weak, or there may be no relationship at all. In any event, the matrix must be completed.

**The Absolute Score and Relative Score Rooms:** In the Absolute Score and Relative Score rooms, the team (1) "creates a model or hypothesis as to how product performance contributes to customer satisfaction", and (2) uses the Relationships Matrix and the Customer Importance Ratings to rate the various performance measures (the "Hows") by their importance to customer satisfaction. In moving from the "Whats" room to the Score rooms, the technical members of the team play an increasing role, and they predominate in the remaining rooms, though input from all team members is still essential.

**The Correlation Matrix Room:** We have noted that users' expressed requirements often conflict with each other. When this happens, the conflicts become apparent in the "How" metrics, and the Correlation Matrix room is where they can be resolved. To cite an example from Squires: "...Perhaps the customer wants a car that is fast, so your team comes up with the "how" of "elapsed time in the quarter mile". After comparing performance between your car and the competitor's vehicle, you realize that "you blew the doors off the competitor's old crate". However when you look in the Customer Competitive Assessment room, you see that most of the marketplace perceives the competitor's car as being faster. While you might have chosen one of the correct "hows" to measure performance, it is clear that your single "how" does not completely reflect performance needed to make your car appear faster." [20] In other words, metrics may accurately measure factors which are significant for customer satisfaction, yet customer perception may not match the technical criteria; comparing the results of the Technical Competitive Assessment and Customer Competitive Assessment rooms can reveal such problems.

**The Target Values Room:** At this point, the requirements have been identified, evaluated, and tested in the preceding rooms. The final set of recommended specifications is placed in the Target Values room.

The preceding discussion of QFD has focused on eliciting requirements and developing a product specification. In QFD, this is often called the "phase one" matrix. A similar "phase two" matrix can be applied in the design phase, and even a "phase three" matrix during implementation or manufacturing. As this description makes clear, QFD is a suitable structured process for developing system requirements for almost any product, including software. However, software development is distinctive in many ways, especially those discussed earlier in Section 2. Adapting the QFD procedure to the specific issues of software requirements elicitation and analysis is complex, and we are still studying how this can be done.
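The scoring step described for the Absolute Score and Relative Score rooms can be sketched as a small computation. The following Python fragment is only an illustration: the "Whats", "Hows", importance ratings, and the 9/3/1 weighting of strong/medium/weak relationships are invented data and a common QFD convention, not values taken from this paper.

```python
# Illustrative House of Quality scoring: absolute score of each "How" is the
# importance-weighted sum of its relationships to the "Whats"; relative
# scores normalize the absolute scores.  All numbers are invented.
whats      = ["easy to learn", "fast response", "secure data"]
importance = {"easy to learn": 3, "fast response": 5, "secure data": 4}

hows = ["training time (h)", "avg. latency (ms)", "encrypted storage (%)"]

# Relationship of every "What" to every "How": strong=9, medium=3, weak=1.
relationship = {
    ("easy to learn", "training time (h)"):     9,
    ("easy to learn", "avg. latency (ms)"):     1,
    ("fast response", "avg. latency (ms)"):     9,
    ("secure data",   "encrypted storage (%)"): 9,
    ("secure data",   "avg. latency (ms)"):     3,
}

def absolute_scores():
    """Absolute score of each 'How': sum over 'Whats' of importance x relationship."""
    return {
        how: sum(importance[w] * relationship.get((w, how), 0) for w in whats)
        for how in hows
    }

def relative_scores(absolute):
    """Relative score: each absolute score as a fraction of the total."""
    total = sum(absolute.values())
    return {how: score / total for how, score in absolute.items()}

abs_scores = absolute_scores()
for how, rel in relative_scores(abs_scores).items():
    print(f"{how:25s} absolute={abs_scores[how]:3d} relative={rel:.2f}")
```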
8. Risk Analysis Using CMM

The following description is based on Fox [10]. The Capability Maturity Model for Risk Management is a model both for describing the present maturity of the risk management processes in an organization, and for studying those processes in order to develop a more mature, i.e. more effective, risk management process. The CMM is divided into five maturity levels [10]:

1. **Initial.** The decision support process for managing risk is characterized as ad hoc, and occasionally even chaotic. Remove an individual and the processes may change dramatically, commensurate with the next individual's level of ability and experience.
2. **Repeatable.** Basic management processes are established to document the management of the organisation. The necessary process discipline is in place to repeat earlier successes on similar tasks, based on the previous experience of the organisation.
3. **Defined.** The risk management processes are standardised, documented, and integrated into the normal decision-making processes of the organisation.
4. **Managed.** Detailed measures are taken of the management decisions made, of the formal process of managing risk, and of the quality of that risk management (planning, including setting the context; risk identification; risk assessment; risk evaluation; mitigation of risk to an acceptable level; and the monitoring of risk and review of the whole process). All the business processes and the output products or services are quantitatively understood and controlled.
5. **Optimizing.** Continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies, up to the measurement of the public affected by the decisions.

All levels except level 1 are analyzed into components, which are used to develop strategies for improved risk management. At level 2, the task is to study the existing risk management process. At level 3, the goal is to work toward a "culture of effective business decision-making processes", including strategic planning, plans for each unit, corporate education, process integration and development, and effective communication. At level 4, the goal is to develop quantitative measures of the organization's risks, and at level 5, to implement a continual, measurable process improvement. Each level builds on the previous levels.
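To make the ordering of the five levels and the "each level builds on the previous" idea concrete, the following is a small illustrative Python sketch. The level names follow Fox [10] as quoted above; the one-line goal strings are paraphrases of the text, and the helper function is our own illustration rather than part of the CMM itself.

```python
from enum import IntEnum

class MaturityLevel(IntEnum):
    """The five risk-management maturity levels, ordered from lowest to highest."""
    INITIAL    = 1
    REPEATABLE = 2
    DEFINED    = 3
    MANAGED    = 4
    OPTIMIZING = 5

# Improvement goal associated in the text with each level above Initial.
GOALS = {
    MaturityLevel.REPEATABLE: "study and document the existing risk management process",
    MaturityLevel.DEFINED:    "build a culture of effective business decision-making",
    MaturityLevel.MANAGED:    "develop quantitative measures of the organization's risks",
    MaturityLevel.OPTIMIZING: "implement continual, measurable process improvement",
}

def improvement_path(current: MaturityLevel):
    """Each level builds on the previous ones: return the goals still ahead."""
    return [(lvl.name, GOALS[lvl]) for lvl in MaturityLevel if lvl > current]

for name, goal in improvement_path(MaturityLevel.REPEATABLE):
    print(f"{name}: {goal}")
```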
9. **Evaluation Methods**

We propose in this paper an effective data collection method for evaluating software development methodologies, from the definition of the objectives of the data collection to the analysis of the results. Any software organization is interested in the analysis of techniques, their integration into a new methodology, and the engineering of that methodology for particular environments [22]. An effective way to evaluate a methodology, understand the environment, and refine the methodology for the environment is to collect data that characterizes the methodology and the environment and supplies insight into both. The proposed method will therefore be evaluated by collecting data from domain experts and from other users who have studied the technique thoroughly. Data showing where changes were made, what kinds of changes were made, and the effort involved in making those changes can be used to evaluate methodologies, characterize environments, and permit the proper engineering of the methodologies for the environments. Users will first be asked to evaluate the technique subjectively, through questions including the following:
- (true/false) In current software elicitation, many important things are overlooked.
- (true/false) Pictorial representations of keywords help to make their meanings clear and unambiguous.
- (true/false) The sentences generated by collecting, mapping, and relating keywords can easily be converted to accurate system requirements.
- (true/false) The technique eliminates irrelevant, redundant, or trivial requirements.
- (true/false) When many keywords (e.g. several hundred or thousand) are collected, the mapping process and its results are still manageable.
- Is this technique simple?
- Is this technique effective?

Next, the technique will be applied to actual elicitation situations of various scales, and stakeholders will be asked about their satisfaction with various parameters of the elicitation:
- Are the requirements complete?
- Are they within the scope of the system?
- Have irrelevant, redundant, and trivial requirements been avoided?
- Do the requirements accurately represent the expressed needs of the users?

10. **Conclusion**

The goal of this proposal is to develop a new methodology for improved requirements elicitation. The major problems in most elicitation techniques derive from their imprecision, which leads to vague or even incorrect requirements. Specifically, we have proposed:
- training of users in the capabilities and limits of the computer and of the software developers;
- collection of keywords from stakeholders in all categories;
- pictorial representation of keywords to facilitate agreement on their meaning;
- keyword mapping to generate system requirements;
- Quality Function Deployment (QFD) to make sure that the requirements are relevant to the task and to the users' needs;
- the Capability Maturity Model (CMM) to make sure that the requirements take into account the risks the system will encounter or generate.

With these improvements, we believe that software requirements elicitation can be raised to a new level of rigor and effectiveness.

References:
{"Source-Url": "https://www.tamps.cinvestav.mx/~ertello/svam/s04-SWE-SRElicitation.pdf", "len_cl100k_base": 7454, "olmocr-version": "0.1.53", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 31451, "total-output-tokens": 8979, "length": "2e12", "weborganizer": {"__label__adult": 0.0002903938293457031, "__label__art_design": 0.0003135204315185547, "__label__crime_law": 0.00025200843811035156, "__label__education_jobs": 0.0015516281127929688, "__label__entertainment": 4.3392181396484375e-05, "__label__fashion_beauty": 0.00013315677642822266, "__label__finance_business": 0.00040078163146972656, "__label__food_dining": 0.000286102294921875, "__label__games": 0.0004425048828125, "__label__hardware": 0.0004584789276123047, "__label__health": 0.0002841949462890625, "__label__history": 0.00015866756439208984, "__label__home_hobbies": 6.318092346191406e-05, "__label__industrial": 0.000263214111328125, "__label__literature": 0.00025391578674316406, "__label__politics": 0.00017714500427246094, "__label__religion": 0.00029540061950683594, "__label__science_tech": 0.00504302978515625, "__label__social_life": 7.021427154541016e-05, "__label__software": 0.004886627197265625, "__label__software_dev": 0.98388671875, "__label__sports_fitness": 0.00019729137420654297, "__label__transportation": 0.00028443336486816406, "__label__travel": 0.0001493692398071289}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 40617, 0.01912]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 40617, 0.40627]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 40617, 0.93094]], "google_gemma-3-12b-it_contains_pii": [[0, 2775, false], [2775, 7341, null], [7341, 12141, null], [12141, 16346, null], [16346, 20737, null], [20737, 23938, null], [23938, 24918, null], [24918, 28437, null], [28437, 32819, null], [32819, 37144, null], [37144, 40617, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2775, true], [2775, 7341, null], [7341, 12141, null], [12141, 16346, null], [16346, 20737, null], [20737, 23938, null], [23938, 24918, null], [24918, 28437, null], [28437, 32819, null], [32819, 37144, null], [37144, 40617, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 40617, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 40617, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 40617, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 40617, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 40617, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 40617, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 40617, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 40617, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 40617, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 40617, null]], "pdf_page_numbers": [[0, 2775, 1], [2775, 7341, 2], [7341, 12141, 3], [12141, 16346, 4], [16346, 20737, 5], [20737, 23938, 6], [23938, 24918, 7], [24918, 28437, 8], [28437, 32819, 9], [32819, 37144, 10], [37144, 40617, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 40617, 0.01365]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
86f9706d53a9ab0f20e66d0d877d425ec7b2195c
The BECCAL Experiment Design and Control Software Arnau Prat German Aerospace Center (DLR) Institute for Software Technology D-38108 Braunschweig, Germany arnau.pratisala@dlr.de Jan Sommer German Aerospace Center (DLR) Institute for Software Technology D-38108 Braunschweig, Germany jan.sommer@dlr.de Ayush Mani Nepal German Aerospace Center (DLR) Institute for Software Technology D-38108 Braunschweig, Germany ayush.nepal@dlr.de Tobias Franz German Aerospace Center (DLR) Institute for Software Technology D-38108 Braunschweig, Germany tobias.franz@dlr.de Andreas Gerndt German Aerospace Center (DLR) Institute for Software Technology D-38108 Braunschweig, Germany University of Bremen D-28359 Bremen, Germany andreas.gerndt@dlr.de Hauke Müntinga German Aerospace Center (DLR) Institute for Satellite Geodesy and Inertial Sensing D-28359 Bremen, Germany hauke.muenteringa@dlr.de Daniel Lüdtke German Aerospace Center (DLR) Institute for Software Technology D-38108 Braunschweig, Germany daniel.luedtke@dlr.de Abstract—This paper presents the software responsible for the design and execution of the experiments in the Bose-Einstein Condensate and Cold Atom Laboratory (BECCAL) mission, an experiment with ultra-cold and condensed atoms on the International Space Station. The software consists of two parts: the experiment control software and the experiment design tools. The first corresponds to the software running on the payload and is in charge of controlling and executing the experiments. While the latter are tools used by scientists to create the experiment definition that will be later uploaded to the instrument to be executed. To overcome the challenge of developing software with such complexity, it was decided to follow a model-driven development approach. Several domain-specific languages (DSLs) have been created to allow scientists to describe their experiments in a domain-specific way. These descriptions are then uploaded and executed by different interpreters onboard. The paper details the architecture of the experiment control software and the different modules that compose it, as well as the developed languages and tools used to describe new experiments. The paper also discusses and evaluates some important aspects of the software, such as how resilient it is to failures, as well as the advantages and disadvantages of the selected approach compared to other approaches used in similar missions. The developed software will also be used for the MAIUS-2/3 missions. 1. INTRODUCTION The NASA-DLR Bose-Einstein Condensate and Cold Atomic Laboratory (BECCAL) [1] aims at conducting experiments with ultra-cold and condensed atoms on board the International Space Station (ISS). Its goal is to enable fundamental research as well as advance technological development by satisfying a wide range of experimental needs. The unique microgravity environment on board the ISS will contribute to these goals by providing prolonged times of free fall and observation. The payload operation will be done in collaboration with scientist from many different institutions, and the aim is to enable all participants to design and execute their experiments on the instrument. Finding a common framework to be used by all scientists to develop new experiments for the apparatus can be a difficult task. The developed tools should be powerful enough to create any possible experiment as well as simple enough to be used by any user without in-depth knowledge of the hardware. 
Additionally, the tools must prevent the user from designing experiments which could harm the apparatus. This paper presents the software tools responsible for the design and execution of the experiments in BECCAL: the experiment control software and the experiment design tools. The first is the software running on the payload and is in charge of controlling and executing the experiments, while the latter are the tools used by the scientists to create the experiment definitions that will later be uploaded to the instrument to be executed. Both inherit from the software [2] used in the MAIUS-1 sounding rocket mission [3], the first to create a Bose-Einstein condensate in space. While in the case of MAIUS-1 the microgravity phase, in which the experiments were executed, was only a few minutes long, in BECCAL this time will be much longer. This will allow for longer and more complex experiments, which the experiment control software needs to support. The main difference of the software with respect to its predecessor is the possibility to add or modify experiment descriptions without recompiling the software. This is achieved by using interpreter engines which interpret textual experiment descriptions and execute them after successful sanity checks. This feature was not needed for MAIUS-1 due to the limited scope of the mission, but for a multi-user facility such as BECCAL it becomes mandatory. The software implements several other features to meet the new mission requirements, such as support for hardware that is new compared to the MAIUS-1 mission, as well as updates of third-party frameworks and libraries to their latest versions. Also, since the experiment will be a payload on board the ISS, the overall system has to fulfill the necessary safety requirements; to this end, the design and code quality of the overall software have to meet the required standards. The remainder of this paper is organized as follows. Section 2 gives a brief overview of related work. Section 3 describes the experiment control software. Section 4 gives a summary of the languages used to design the experiments. Section 5 introduces the experiment design tools. Section 6 presents results on some important aspects of the software. Finally, Section 7 states the conclusions and future work.

2. RELATED WORK

On-board software for spaceborne experiments is usually written using languages such as C or C++. One of the reasons for this is the flexibility and performance provided by these languages [4]. However, if non-computer experts, as in this case, need to contribute to the software by creating new experiments, the use of these languages can become a problem due to their complexity. One solution to this are Domain Specific Languages (DSLs) [5]. Such languages are usually designed for a specific project and have the advantage of introducing only a reduced set of elements, which makes them easy to learn and use. Descriptions written using these languages can then be used to generate code to be compiled with the rest of the software, or alternatively they can be executed by an interpreter. These languages often make use of a Model-Based Software Development (MBSD) approach to specify their syntax through a precise and concrete language model. Apart from providing a high level of abstraction and making descriptions platform independent, this approach has other advantages such as reduced manual implementation of interfaces and increased maintainability [6].
The reason for this is because the model becomes the single point of truth and redundancies can be substantially reduced. Modeling languages such as the Unified Modelling Language (UML) and System Modelling Language (SysML) have proven to be invaluable tools for designing complex systems [7] [8]. Modeling languages can have a textual or graphical representation. For the second one, a good example is MATLAB/Simulink, which allows to graphically specify software components to later generate source code from it. Overall, the use of such languages is a good fit for BECCAL, where non computer experts need to collaborate and the tools have to be as accessible as possible. As we will see, BECCAL implements different Domain Specific Languages (DSLs), both graphical and textual to easily create new experiment definitions. These languages are designed specifically for the project at hand, which makes them easier to use due to their reduced set of elements. The presented approach differs from the one used in similar missions such as NASA's Cold Atom Laboratory (CAL) [9] [10]. In CAL, experiments are designed and controlled using LabVIEW taking advantage of its wide availability in industry as well as its easy to use interface. However, such an approach usually requires access to a graphical session on the computer running the experiments, either via direct access or using a remote desktop sharing environment. This costs bandwidth and may be subject to delays, possibly making it difficult to operate remotely. On the other side, our approach offers similar advantages without such burden. The described approach also differs from the one used in MAIUS-1 as mentioned in the introduction. Since experiments are not converted to C++ to be compiled together with the flight software but instead are saved as a textual file to be interpreted on-board. For MAIUS-1, compilation was feasible due to its short flight duration on a sounding rocket. However, a multi-user facility such as BECCAL requires easy uploading and change of experiments. The main advantage of this approach is that it allows adding new experiments without having to restart the software. On the other side, before interpreting them, extensive sanity checks need to be performed in case there could be errors with the experiment descriptions or input parameters. The resilience of the software due to possible errors in the experiment descriptions is evaluated in Section 6. Another alternative used by other complex experiment control software for physical experiments is to use high-level languages such as Python, which may be seen as more accessible and simpler than C or C++. An example of this can be found in the JOKARUS [11] mission, a compact optical iodine frequency reference for a sounding rocket, in which Python was used to program the control software. However, the use of these languages usually implies a higher memory consumption and a decrease in performance compared to low-level languages, which sometimes cannot be afforded. Another interesting example is Orocos [12], an execution environment for building real-time robotics, which allows to control robots by using a different DSL, which are later interpreted using a Lua scripting engine. This approach is very similar to ours, however the use of custom scripting languages instead of a general purpose such as Lua allowed us to tailor the interpreters and syntax of the languages themselves to our needs and optimize it. 3. 
EXPERIMENT CONTROL SOFTWARE The experiment control software is the software running on the on-board computer of the experiment. It has direct communication with the experiment electronics and its task is to execute the experiment descriptions as well as to record data from the different subsystems. The software is controlled and monitored through the ground control software. Due to the large amount of experiment hardware and domains involved, a model-driven approach was chosen for its development. This model-driven approach is used for the hardware drivers and the experiment description part of the software. For this, a minimal core system of the on-board software provides the necessary interfaces to implement drivers and experiments. These descriptions are generated by the engineers and scientists using different DSLs. For the drivers, the generated code is compiled with the rest of the flight software since the hardware will not change during the experiment lifetime. The experiment descriptions are saved in a textual form to be later interpreted by different interpreters, which are part of the core software. Figure 1: Experiment Control Software Architecture Communication between the different modules is realized by the “Tasking Framework” developed by DLR [13] which has been successfully used in the MAIUS-1 mission and other DLR projects [14]. In this framework, modules are connected through so-called channels (represented as arrows in Figure 1). These are unidirectional and handle a certain datatype that has to fit the input and output slots of the connected modules. In BECCAL these datatypes are called packets. Tasking Framework follows an event-driven approach, which means that when a module receives a new packet a response procedure will be triggered. These modules can be gathered into different groups depending on their functionality. Namely four groups have been identified: Telecommunication, Experiment Execution, Hardware Management, and Data Management. Next, each of these groups is described. The Telecommunication group handles all communication that is exchanged live with the Ground Control Software. It is in charge of accepting telecommands from ground and sending back telemetry packets. The received telecommands are checked for validity and forwarded to the responsible module for further processing. Telemetry data from other software modules is packaged into telemetry packets according to the communication protocol and sent to ground. It is also the responsibility of the Telecommunication group to monitor the state of the ground connection and reestablish it in case the connection is lost. The Experiment Execution group is separated into two levels: sequences and graphs. Sequences are essentially building blocks of the experiment and are used to carry out actions on the experiment hardware. The graph chains multiple sequences into a full experiment cycle and even allow for a certain amount of flow control depending on measured conditions. Both the sequences as well as the graphs are saved as files and are loaded by the Experiment Execution group. The Experiment Execution group contains all software parts necessary to ensure the correct execution order of the experiment. This includes the interpreter for the experiment sequences and graphs as well as the control flow defined therein. The group then passes the actions to the Hardware Management group for execution. The Data Management group processes all raw data that is collected from the hardware. 
This group does not have direct access to the hardware and it receives the raw data from the Hardware Management group. The group converts the incoming raw data into the respective physical units. Additionally, to the conversion of raw numeric data, the Data Management module is also responsible for downscaling images for the live telemetry downlink. The processed data is then distributed threefold: It is partly fed back to the Experiment Execution group to determine the experiment flow. Parts are sent to the Telecommunication group and then processed as live telemetry data for the ground station. This also includes low resolution versions of the acquired images. Finally, the full set of data, including full resolution images, is saved to the file system and can be downloaded during idle periods of the experiment. A special group, which does not appear in Figure 1. It is the Watchdog group, which is not part of the experiment control system but of the Failure Recovery System. The main purpose of the module is to monitor the remaining groups and report and restart them if one of it is failing. To this end, this uses a Failure Detection Isolation and Recovery (FDIR) strategy. The on-board computer running the experiment control software uses a standard Linux operating system. Uploading the experiment files and download of scientific data can be done using standard file transfer protocols like Secure File Transfer Protocol (SFTP). The system also allows carrying out maintenance such as software update through direct access to the computer using protocols such as SSH. 4. EXPERIMENT LANGUAGES This section goes into more details regarding the different languages used to describe experiments. In total five different languages have been developed, which can be split into two domains: electronic domain and experiment domain. The languages corresponding to the electronics domain are used to provide hardware definitions that will be used to generate source code. Whereas the languages corresponding to the experiment domain are used to describe the experiments to be executed on the hardware and are the languages that this paper will focus on. This domain is formed by three DSLs. First, is the Sequence DSL, which is an abstract description of the behavior of a single step of the experiment. Additionally, to prevent repeating code, Sequences are subdivided into Subsequences as many parts of sequences will be repeated and only differ by parameters or timing. This corresponds to the Subsequence DSL. The last layer of the model is a graphical representation of the experiment flow called Experiment Execution Graph, which is designed as a binary decision graph. This can be seen in Figure 2. ![Experiment Execution Graph](image) **Figure 2: Developed Domain Specific Languages** The syntax of the Sequences, Subsequences and generated experiment execution graphs is based on YAML (YAML Ain’t Markup Language). The main reasons why it was chosen was because it was found to have a good trade-off between human and machine readability. Although it is possible to edit these files manually, it is intended that the creation of new experiments is done through the experiment design tools presented in Section 5. These tools provide verification and validation, warning the user of possible errors, and simplify the overall process of creating new experiments. Communication of the sequences with the electronics is done through the so-called channels and they are the basic elements to interact with the experiment hardware. 
There are two types of channels: output channels, which change some physical parameter (actuators), and input channels, which measure a certain parameter (sensors). In a sense, sequences and subsequences can be seen as a precise timing description of the state of the apparatus, where each step describes which channel is modified and how. Graphs, on the other hand, can use input channels to read the current state of the apparatus and decide which step to take next.

**Figure 3: Example Subsequence File**

name: TestSubseq
description: 'An example test subsequence'
date: '01.01.2021'
init: false
parameters:
  Analog0: {default: 0.0, max: 10.0, min: -10.0}
  Analog1: {default: 0.0, max: 10.0, min: -10.0}
  Time: {default: 10.0, max: 500.0, min: 0.0}
subsequence:
  - time: 0.0
    slotname: SetToZeroAnalog12
    channels:
      - {name: AnalogOut00, value: 0.0}
      - {name: AnalogOut01, value: 0.0}
      - {name: AnalogOut02, value: 0.0}
  - time: 0.1 + Time
    slotname: SetValuesAnalog12
    channels:
      - {name: AnalogOut00, value: Analog1}
      - {name: AnalogOut01, value: Analog2}

Similar to a sequence, a subsequence may contain zero or more parameters and slot elements; each of them is described below. Each subsequence should also have a unique name. In the same way as a sequence, a subsequence starts with its metadata fields, i.e., its unique name and optionally a description and date. The init option is used to specify whether the channels will be called in init mode. While sequences are composed of subsequences, subsequences are composed of slots. Each slot allows one to specify a set of channels that need to be called at a certain time. The value for these channels can be a parameter, a float, or a bool. As seen in Section 2, the use of DSLs has several advantages; however, it also limits what can be expressed with them. Thus, there are certain sequences that are not realizable using this syntax and had to be coded directly in C++. These sequences are called Untimed Sequences and are compiled together with the flight software; the normal sequences are known as Timed Sequences. An example of an Untimed Sequence is the sequence in charge of locking the lasers to the right frequency. It is foreseen that a pool of commonly used sequences and subsequences (both timed and untimed) containing basic functionalities, such as taking a picture, will be shared with the experiment developers. This pool can then be used as a set of basic blocks to create more complex experiments, using them as basic elements of the graphs. While additional timed sequences can be created using the experiment editors, new untimed sequences will need to be coded by the payload developers. The graphs, on the other hand, are graphical representations designed as binary decision graphs. Similar to UML activity diagrams, graphs are represented by boxes and decision points. Each decision point is shaped like a rhombus and has a binary output (true or false); the output depends on the evaluation of the expression inside the decision point. Graphs can have different types of boxes: assignment boxes, where an assignment to a parameter can be done; sequence boxes, which call a given sequence; subgraph boxes, which call another graph; and scan-fit blocks, which allow repeating a sequence while changing the values of different parameters that the sequence takes. Depending on its type, a box has a different color or shape.
Elements are connected together through arrows and each element can only be connected to another element. Figure 4 visualizes an example sequence-graph with two entry-points. A graph can be started from either of these entry-points, and a graph execution terminates at an end-point. Apart from sequences and control blocks, the graphs can also include other special components. One of the most important are the so-called scan-fit blocks. These allow to scan a sequence with different values for certain parameters and optimize them by analyzing the output of the sequence being run. These have been substantially improved with respect to the previous version flown in MAIUS-1 allowing more complex scans and optimization of multiple parameters simultaneously. name: GraphTest_conf1 description: 'An example test graph' date: '02/09/2020 10:36:30' globals: - glob1: {default: 1.0, max: 2.0, min: -1.0} - glob2: {default: -1.0, max: 2.0, min: -1.0} points: - (id: 0x00000000, next: 0x00000010) - (id: 0xFFFFFFFF, next: 0xFFFFFFFF) - (id: 0x00000001, next: 0x00000012) controls: - (id: 0x00000104, condition: 'glob1 < measurement(NTC_2DCoil)', nextTrue: 0x00000105, nextFalse: 0xFFFFFFFF) - (id: 0x00000102, condition: 'glob2 := 2.0', nextTrue: 0x00000104, nextFalse: 0x00000104) sequences: - (id: 0x00000105, name: SI_2, hash: 0x99f5d140, parameters: [100.0, 1.0, 1.0, 800.0, 800.0], next: 0xFFFFFFFF) subgraphs: - (id: 0x00000101, name: scanfit/GlobTest_conf1_SF_0_0x09AC94EA, next: 0x00000105) Figure 5: A Generated Flow File Example In order to be interpreted, graphs are translated into so called flow files. Similar to sequences, flow files are also based on YAML and they are an element representation of graphs. Figure 5 shows an example of one of these files. All the elements are grouped based on these basic types. In this case, entry points, sequences, subgraphs and control blocks. Assignment blocks are translated into control blocks, and scan-fits are generated into subgraphs in which basic elements emulates the behavior. 5. EXPERIMENT DESIGN TOOLS The experiment design tools are used by the scientist to design new experiments. They support the scientist with designing a formally correct experiment which later can be uploaded and executed on the payload. There are two main tools: the Sequence GUI and the Experiment Editor. Both software packages provide graphical user interfaces to assists physicists to design complex experiments. The Sequence GUI has been used to design the Sequences and the Subsequences, whereas the Experiment Editor is used to design the experiment execution graphs. In addition, the Sequence GUI can also be used to locally control the experiments by directly uploading sequences to the apparatus. Both tools are foreseen to be delivered to the scientists as a single package. A more detailed description of both tools is given below. Figure 6 shows a screenshot of the Sequence GUI, where values for different channels of a subsequence are displayed with GUI controls inside slots, allowing for easy editing of subsequences. Triggers and digital channels are displayed as buttons, while inputs for analog channels allow floating point numbers with optional parameters. On the top, subsequence parameters can be added, removed or changed. An additional GUI element shown in Figure 7 allows arranging and parameterizing subsequences into sequences. 
The Sequence GUI is intended for the design of sequences and subsequences, while experiment execution graphs are designed using the Experiment Editor explained below. Since sequences can only write to analog and digital items but not read from them, they cannot react to the current state of the experiment. This behavior is achieved through the experiment execution graphs, which pass different parameters to the sequences they call depending on the current execution state. Input errors, such as invalid values for physical channels or overlapping subsequences, are checked at runtime and immediate feedback is given to the user with details about the errors. These checks are powered by the same hardware definitions that are used by the experiment control software; the hardware definitions are therefore the single point of truth throughout all software tools. The Sequence GUI is written in Python with Qt bindings for the GUI elements, which allows for high-level graphics support as well as easy debugging and extensibility by scientists. The Experiment Editor is based on Java/Eclipse and provides installable features with plugins. This modular approach allows for an incremental development workflow, so that features can be added or updated conveniently as new requirements arrive from the scientists. The Experiment Editor follows a model-driven development methodology and is built on top of Virtual Satellite 4 (VirSat4) [15], an open-source software for model-based systems engineering (MBSE). The MBSE approach increases productivity by allowing source code, test files, and documentation to be generated automatically from the data model of the system. To that end, the Experiment Editor provides textual as well as graphical DSLs to describe and configure the model, i.e., the experiment execution graph. Moreover, through the import mechanism, sequence and hardware DSLs can be linked to the model. New textual DSLs have been developed to define condition expressions, assignment expressions, and the scan-fit operation. Figure 8 shows a screenshot of the Experiment Editor GUI, where an example experiment execution graph can be seen; an example scan-fit operation using the respective DSL can be seen in the bottom panel. The panel on the left shows the project tree and the one on the right shows the palette. The user can choose elements from the palette, click anywhere on the graph to create them, and connect them using the connectors. All graphs start at an entry point and end at the end point. Each of these blocks can be customized individually by opening it in an editor: a double-click on the DSL blocks (assignment, control, and scan-fit) opens the respective DSL textual editor, whereas for other blocks the respective editor GUI is opened. A graph instance can be configured by creating a graph configuration for it. A graph configuration holds an instance of all global parameters associated with the graph, and initial values of the global parameters can be assigned in it. The flow files are generated from the graph configuration by triggering a generator: a flow file is generated for the graph and for all other graphs that the configured experiment execution graph includes as subgraphs. Furthermore, all scan-fit operations are treated as subgraphs and are also serialized into individual flow files.
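The parameter bounds declared in the subsequence files (Figure 3) are what both the design tools and the on-board validators check values against. The following Python sketch, which assumes the PyYAML package, is only an illustration of that kind of check; the concrete BECCAL implementation is not shown in the paper, so all details here are our own assumptions.

```python
# Minimal sketch of a parameter bounds check for a subsequence definition:
# values outside [min, max] fall back to the declared default with a warning,
# mirroring the on-the-fly validation behavior described in Section 6.
import warnings
import yaml

def validate_parameters(subsequence_yaml: str, requested: dict) -> dict:
    """Return validated parameter values for one subsequence definition."""
    definition = yaml.safe_load(subsequence_yaml)
    validated = {}
    for name, spec in definition["parameters"].items():
        value = requested.get(name, spec["default"])
        if spec["min"] <= value <= spec["max"]:
            validated[name] = value
        else:
            warnings.warn(
                f"{name}={value} outside [{spec['min']}, {spec['max']}]; "
                f"using default {spec['default']}"
            )
            validated[name] = spec["default"]
    return validated

example = """
name: TestSubseq
parameters:
  Analog0: {default: 0.0, max: 10.0, min: -10.0}
  Time:    {default: 10.0, max: 500.0, min: 0.0}
"""

print(validate_parameters(example, {"Analog0": 3.3, "Time": 1200.0}))
# Time is out of bounds, so the default 10.0 is used and a warning is emitted.
```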
6. RESULTS

The BECCAL instrument is scheduled to be launched and integrated on the ISS in 2024 and is planned to be operated for several years. In order to support such long lead times, the experiment control software as well as the experiment design tools need to be developed with long-term maintainability in mind. For the experiment control software, this was comparably easy to achieve: it mostly depends on a standard C++ compiler and libraries provided by the operating system, which will remain available given the common support schedules of commercial Linux operating systems. For the experiment design tools, this was more difficult to achieve, since GUI components are often subject to constant progress and changes. We therefore use Virtual Satellite, which is developed by DLR, as the base platform for our new tools. It is a strategic software product for DLR's model-based systems engineering effort, which ensures long-term maintenance. The GUI elements of the Experiment Editor for creating and manipulating graphs had to be re-implemented based on Graphiti, a more modern and future-proof framework than the Eclipse Modeling Framework (EMF) used in MAIUS-1. Similarly, the Sequence GUI has been updated to the most recent version of Python and the corresponding Qt bindings. In both cases, care was taken to keep continuity in the user interface. A first version of the new BECCAL software, both the experiment control software and the experiment design tools, has been disseminated to scientists working with a laboratory setup. The feedback received from the experts who use the tools, gathered through different interviews, is very positive. Although large parts of the user interface of the new experiment design tools were re-implemented compared to MAIUS-1, only little time was necessary to train the users to work with the new user interface, and it is now already part of the daily work in the laboratory. The round-trip time for experiment changes has decreased significantly. Previously, when changes were introduced to sequences or experiment execution graphs, the experiment control software needed to be recompiled, which could take up to 20 min in worst-case scenarios. Recompilation is now only necessary if changes are made to the control electronics of the experiment, which occurs very rarely in the laboratory and will not occur at all for the final BECCAL instrument. Given the significant increase in complexity of BECCAL compared to earlier instruments like MAIUS-1, longer and more complex test campaigns in the laboratories are also to be expected; the achieved time savings in the daily work of the scientists will facilitate this work greatly. BECCAL is developed with an international collaboration of scientists for the experiment design in mind, so being able to exchange knowledge and experience as well as to trace changes and contributions is necessary. With the new BECCAL software, all experiment input data, i.e., sequences, subsequences, and experiment execution graphs, are now available as simple text files in YAML format. This means that common tools for distribution, version control, and data comparison known from the software development domain can be used to establish the framework for this collaboration. The YAML format also ensures a certain level of human readability of all input data, giving the chance for a manual check if necessary. A big change with respect to the MAIUS-1 software is the use of custom interpreters for the DSLs. An interesting question is therefore how resilient our software is to corrupted or invalid experiment descriptions as well as to invalid input values to the experiment.
In MAIUS-1, if there was a problem with an experiment definition, the compiler could report it. In BECCAL, however, sanity checks and validation have to be performed on the fly. In the case of BECCAL, we can analyze three scenarios: a first one in which an experiment definition file (sequence, subsequence, or graph) is corrupted and cannot be processed; a second one in which an out-of-bounds value for a parameter is passed to an experiment; and finally the case in which a graph goes into an error state because the dynamics of the experiment output wrong values or the graph enters an infinite loop. The first scenario is easily solved by using a checksum in the experiment definitions. The experiment control software computes the hash of the file and checks whether it matches the provided one. In case the hashes do not match, the definition is deemed corrupted, the experiment is not executed, and a warning is sent to the experiment operator. The second scenario is handled on the fly through validators implemented in the interpreters. When assigning a value to a parameter, the experiment control software first checks whether the value lies between the defined minimum and maximum bounds. If so, the value is assigned to the parameter; if not, the default value is assigned and a warning message is sent to the operator. The third case can only be checked through simulation, either by running the experiment in a simulator or on one of the planned ground test beds. A future improvement could be to perform a model-checking analysis on the graph. Every experiment needs to be tested before being uploaded to the ISS, since there are rare combinations of sequences which have the potential to create strong heating and potentially degrade the experiment performance. For this reason, experiments will only be allowed to run on the ISS if they have been tested and qualified on one of the ground test beds. For qualification, an operational procedure is in place which ensures that potentially damaging experiment configurations will not be allowed to be transmitted to the BECCAL instrument on board the ISS.

7. CONCLUSION

In this paper, we presented the software responsible for the design and execution of the experiments in the BECCAL mission. It is composed of two parts: the experiment control software, which runs on the on-board computer of the apparatus and is in charge of executing the experiments, and the experiment design tools, which are used by the scientists to design new experiments. Both inherit from the MAIUS-1 software, and the main novelty is the possibility to add and execute new experiments on the fly; this is done by using an interpreter engine which interprets and executes the experiment definitions without recompiling and restarting the software. At the moment, BECCAL is expected to fly in 2024, and several improvements will still be made to the software. However, part of the software will already be tested in the MAIUS-2/3 missions, which are expected to fly in 2022 and 2023, respectively.

ACKNOWLEDGMENTS

This work is partly supported by the German Space Agency (DLR) with funds provided by the Federal Ministry for Economic Affairs and Energy (BMWi) due to an enactment of the German Bundestag under Grant Nos. 50WM1131-1137, DLR50WP1431-1435, 50WP1552-1557, 50WP1700-1706 and 50WM1952-1957. We would also like to thank the BECCAL and MAIUS teams for their contribution and support.

REFERENCES

**Biography**

**Arnau Prat** received his B.S.
degree in electronics systems engineering and M.S. degree in telecommunications engineering from the Polytechnic University of Catalonia (UPC), Barcelona, Spain in 2015 and 2017 respectively. He is currently a research scientist at the German Aerospace Center (DLR) in the department of Software for Space Systems and Interactive Visualization since 2018 where he is involved in the development of on-board software for space mission. From 2016 to 2017, he was a research assistant with the department of Signal Theory and Communications, UPC, Barcelona, Spain, where he was involved in a terahertz radar system. In 2017 he was a visiting student with the department of Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY, USA, where he worked on an intelligent cognitive assistant for space applications. **Jan Sommer** received his M.Sc. in Space Science and Engineering in 2013 Technical University of Luleå. After a traineeship with the European Space Agency in the section for software engineering he is now with the German Aerospace Center (DLR) in the department “Software for Space Systems and Interactive Visualization” since 2015 where he is active in the development of on-board software for spacecraft missions. His current research interests include the application of model-driven software development methods for on-board software development. **Ayush Mani Nepal** received his M.Sc. degree in Computational Sciences in Engineering (CSE) with a major in Electrical Engineering from the Technical University of Braunschweig in 2019. He joined the department of Software for Space Systems and Interactive Visualization at the German Aerospace Center (DLR) during his Masters in year 2017 and has since been active in the development of model-driven software engineering tools for space systems. After writing the Master thesis in 2019, he is working as a scientific researcher in the same department at DLR. His main research interests include machine learning for space domain applications. Tobias Franz received his M.Sc degree in Computer Science from the Technical University of Braunschweig in 2018. He joined the Institute for Software Technology at German Aerospace Center (DLR) in 2012 as part of a university program, where he was active in area of model-driven software development for embedded systems. Currently he is a research scientist with interests in model-based systems engineering for space systems. Hauke Müntinga received his diploma in Physics from the University of Oldenburg in 2008. He then joined the Center of Applied Space Technology and Microgravity at the University of Bremen, where he worked on quantum optical experiments in microgravity in drop-tower and sounding-rocket experiments. In 2019, he received his doctorate in Experimental Physics. In 2020, he joined the Institute for Satellite Geodesy and Inertial Sensing at the German Aerospace Center (DLR), where he develops experiment control software for spaceborne experiments and simulations of quantum sensors. Andreas Gerndt is the head of the department “Software for Space Systems and Interactive Visualization” at the German Aerospace Center (DLR). He received his degree in computer science from Technical University, Darmstadt, Germany in 1993. In the position of a research scientist, he also worked at the Fraunhofer Institute for Computer Graphics (IGD) in Germany. Thereafter, he was a software engineer for several companies with focus on Software Engineering and Computer Graphics. 
In 1999 he continued his studies in Virtual Reality and Scientific Visualization at RWTH Aachen University, Germany, where he received his doctoral degree in computer science. After two years of interdisciplinary research activities as a postdoctoral fellow at the University of Louisiana, Lafayette, USA, he returned to Germany in 2008 to work for DLR in the domain of aerospace software research. Since 2019, he is also Professor in High-Performance Visualization at University of Bremen, Germany. Daniel Lüdtke received the diploma degree Dipl.-Ing. in Computer Engineering from Technische Universität Berlin (Germany) in 2003. He worked as a research assistant at the department of Computer Engineering and Microelectronics, TU Berlin. He joined the German Aerospace Center (DLR), Institute for Software Technology in 2010. Since 2012 he is managing the research group Onboard Software Systems and is vice head of the department Software for Space Systems and Interactive Visualization. His current research interests include model-driven software engineering for space systems with an emphasis on reconfigurable embedded systems.
{"Source-Url": "https://elib.dlr.de/142735/1/paper.pdf", "len_cl100k_base": 8064, "olmocr-version": "0.1.50", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 29359, "total-output-tokens": 10305, "length": "2e12", "weborganizer": {"__label__adult": 0.00034236907958984375, "__label__art_design": 0.0005612373352050781, "__label__crime_law": 0.0002942085266113281, "__label__education_jobs": 0.0016412734985351562, "__label__entertainment": 0.00013649463653564453, "__label__fashion_beauty": 0.00018739700317382812, "__label__finance_business": 0.0002567768096923828, "__label__food_dining": 0.0004620552062988281, "__label__games": 0.0008721351623535156, "__label__hardware": 0.0025691986083984375, "__label__health": 0.0004925727844238281, "__label__history": 0.0005192756652832031, "__label__home_hobbies": 0.00016129016876220703, "__label__industrial": 0.0007734298706054688, "__label__literature": 0.0002453327178955078, "__label__politics": 0.00028514862060546875, "__label__religion": 0.0005488395690917969, "__label__science_tech": 0.1824951171875, "__label__social_life": 0.0001329183578491211, "__label__software": 0.0160369873046875, "__label__software_dev": 0.78955078125, "__label__sports_fitness": 0.0004014968872070313, "__label__transportation": 0.0009260177612304688, "__label__travel": 0.00023949146270751953}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 43830, 0.05979]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 43830, 0.83145]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 43830, 0.92235]], "google_gemma-3-12b-it_contains_pii": [[0, 4649, false], [4649, 11491, null], [11491, 16128, null], [16128, 21047, null], [21047, 25110, null], [25110, 30584, null], [30584, 36796, null], [36796, 41200, null], [41200, 43830, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4649, true], [4649, 11491, null], [11491, 16128, null], [16128, 21047, null], [21047, 25110, null], [25110, 30584, null], [30584, 36796, null], [36796, 41200, null], [41200, 43830, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 43830, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 43830, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 43830, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 43830, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 43830, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 43830, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 43830, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 43830, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 43830, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 43830, null]], "pdf_page_numbers": [[0, 4649, 1], [4649, 11491, 2], [11491, 16128, 3], [16128, 21047, 4], [21047, 25110, 5], [25110, 30584, 6], [30584, 36796, 7], [36796, 41200, 8], [41200, 43830, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 43830, 0.0]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
1bee8ea0bb75d45725ef3d75417cc2bc192bf080
Transforming Source Code to Mathematical Relations for Performance Evaluation

Habib Izadkhah, Department of Computer Science, Faculty of Mathematical Sciences, University of Tabriz, Tabriz, Iran

Abstract – Assessing software quality attributes (such as performance, reliability, and security) from source code is of the utmost importance. The performance of a software system can be improved by its parallel and distributed execution. The aim of parallel and distributed execution is to speed up the program by providing the maximum possible concurrency in executing the distributed segments. It is well known that distributing a program does not always speed up its execution; in some cases, distribution can even increase the running time of the program. Therefore, before distributing a source code, it should be determined whether its distribution can yield the maximum possible concurrency. The existing methods and tools cannot achieve this aim from the source code. In this paper, we propose a mathematical relationship for object-oriented programs that statically analyzes the program by examining the types of synchronous and asynchronous calls inside the source code. Then, we model the invocations of the software methods by Discrete Time Markov Chains (DTMC). Using the properties of the DTMC and the proposed mathematical relationship, we determine whether or not the source code can be distributed on homogeneous processors. The experimental results show that we can determine whether a program is distributable before deploying it on a distributed system.

Keywords: Distributed Software Systems, Source Code, Speedup, Discrete Time Markov Chains

(Received: 18.05.2015; Revised: 21.07.2015; Published: 21.09.2015)

1 Introduction

Large-scale scientific applications that analyze complex scientific problems require high-speed computation that common computers cannot provide. Therefore, using distributed systems and the processing power of numerous processors or cores to reach the desired speed has become standard practice [1]. Yet creating a large-scale distributed program is always more difficult than creating a non-distributed program with the same functionality, as it can turn into a tedious and error-prone task. Computational programs involve many computations, so their execution requires considerable time; if a program cannot be distributed, much of this time is wasted. The most significant cost in a distributed program is the invocation or communication time of its methods; these calls account for most of the execution time. When a program is distributed and two of its classes are placed on two different machines, the invocations between those classes turn into remote calls. As reference [2] specifies, in some cases the distribution of a program can have negative effects on its running time. When there are many calls between two methods, the network traffic increases and, as a result, the efficiency of the distributed program will be lower than that of the initial sequential program. Since constructing a distributed program from source code is complex and time-consuming, it is better to predict whether the source code is distributable before distributing the program across machines. None of the existing methods and tools can achieve this goal from source code.
1.1 The Problem and the Claim

The overall problem addressed in this paper is to specify whether the source code has the potential for parallelization on homogeneous processors, i.e., whether, in case of distribution, it achieves the maximum concurrency compared to the sequential mode. We claim that it is possible to provide a solution to this problem by performing the following tasks: (1) Model the software's method invocations by Markov chains (described in Section 3): nodes represent methods, edges between nodes represent calls between methods, and the weight of an edge gives the number of calls between the corresponding methods. (2) Determine the maximum potential of distributability of each method (described in Section 3). (3) Determine the expected performance of the source code from the obtained Markov chain (described in Section 3). (4) Compute the speedup. Speedup is defined as the execution time of a sequential program divided by the execution time of a parallel program that computes the same result; in particular, Speedup = T_s / T_p, where T_s is the sequential time and T_p is the expected performance.

1.2 The Paper Outline

The rest of the paper is organized as follows: Section 2 reviews related work. In Section 3, we propose a mathematical relation for time estimation by which the potential for distribution of the source code can be determined. A case study is discussed in Section 4. Finally, Section 5 presents conclusions and future work.

2 Related Work and Background

Complicated computational applications cannot be executed in an acceptable time on a single computation machine, so they should be divided into small tasks. Distributed or multiprocessor systems can be used for executing these tasks. Nowadays, most distributed and multiprocessor tools use scheduling methods for distribution. The aim of scheduling is to execute a program on several processors such that the execution time of the whole program is minimal, considering the time of the tasks and the communication time between the processors [3]. Scheduling methods can be divided into two groups: those which can guarantee quality of service, and those which cannot; the former are preferred to the latter. CONDOR [4], SGE [5], PBS [6] and LSF [7] are some of the most popular and widely used scheduling systems. These scheduling systems do not guarantee service quality; they perform scheduling only at the job level and not at the application level. Unlike the above systems, there are some which observe service quality in scheduling, taking Job Characteristics, Planning in Scheduling, Rescheduling and Scheduling Optimization into account. AppleS [8], GraDS [9] and Nimrod/G [10] are among the most famous systems of this kind. Moreover, none of the aforementioned schedulers can predict whether an offered program has the potential to be parallelized, or whether speedup can be achieved in case of parallelization. Also, a tool called DAGC has been presented to find the optimal distribution architecture [11]. DAGC uses a clustering method for finding the optimal distribution architecture, and it uses a mathematical relation to measure the quality of the obtained clusters.
The main problem with the mathematical relation used in this tool, and in similar tools, is that it cannot determine whether a program is capable of parallel execution. In previous work [12], we proposed an analytical model for determining the distributability of a specific method. However, that method cannot determine the overall distributability of a program, and the influence of each individual method on the distribution is not considered. In this research, we determine the overall distributability of a program using DTMCs, taking the contribution of each method into account.

2.1 Overview of Discrete Time Markov Chains

In this section, we discuss Discrete Time Markov Chains (DTMCs), which we use to model the source code's invocations [13]. A DTMC is described by its states and the transition probabilities between the states; the transition probabilities are collected in the one-step transition probability matrix. The one-step transition probability is the probability that the process, when in state i at time n, will next transition to state j at time n + 1. We write:

$$ (1) \quad P_{ij} = P(X_{n+1} = j \mid X_n = i). $$

Note that all the elements in a row of $P$ add up to 1 and each of the $P_{ij}$'s lies in the range [0, 1]. For our purpose, we use an absorbing DTMC. A DTMC is called absorbing if at least one state has no outgoing transition. Each DTMC with several final states can be converted into an absorbing DTMC by adding a single absorbing state and drawing a transition to it from every final state of the DTMC. We can partition the transition probability matrix of an absorbing DTMC as:

$$ (2) \quad P = \begin{bmatrix} I & 0 \\ C & Q \end{bmatrix}. $$

If the DTMC has $n$ states with $m$ absorbing states, $Q$ is an $(n-m) \times (n-m)$ sub-stochastic matrix (with at least one row sum < 1) describing the probabilities of transition only between transient states, $I$ is an $m \times m$ identity matrix, $0$ is an $m \times (n-m)$ matrix of zeros, and $C$ is an $(n-m) \times m$ matrix describing the probabilities of transition from transient states to absorbing states. The $(i,j)$-th entry of $Q^k$ denotes the probability of arriving at state $s_j$ after exactly $k$ steps, starting from state $s_i$. Since $Q$ is sub-stochastic, $Q^k \to 0$ as $k \to \infty$, and hence the inverse matrix $(I - Q)^{-1}$ exists. This is called the fundamental matrix $F$:

$$ (3) \quad F = (I - Q)^{-1} = I + Q + Q^2 + Q^3 + \cdots = \sum_{k=0}^{\infty} Q^k. $$

Let $X_{i,j}$ represent the number of visits to state $j$, starting from state $i$, before the process is absorbed. It can be shown that the expected number of visits to state $j$ starting from state $i$ before entering an absorbing state, $E[X_{i,j}]$, is given by the $(i,j)$-th entry of the fundamental matrix $F$ [14, 15]. So

$$ (4) \quad E[X_{i,j}] = m_{i,j}, $$

where $m_{i,j}$ is the $(i,j)$-th entry of the fundamental matrix $F$. The variance of the number of visits can also be computed using the fundamental matrix. Let $\sigma^2_{i,j}$ denote the variance of the number of visits to state $j$ starting from state $i$. Define $F_D = [md_{i,j}]$ such that:

$$ (5) \quad md_{i,j} = \begin{cases} m_{i,j} & \text{if } i = j \\ 0 & \text{otherwise} \end{cases}. $$

In other words, $F_D$ is a diagonal matrix whose diagonal entries are the same as those of $F$. If we define $F_2 = [m^2_{i,j}]$, i.e., the entrywise square of $F$, we have:

$$ (6) \quad \sigma^2 = F(2F_D - I) - F_2. $$

Hence:

$$ (7) \quad \text{Var}[X_{i,j}] = \sigma^2_{i,j}. $$
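To make relations (3) and (4) concrete, the following is a minimal sketch (not taken from the paper) that approximates the fundamental matrix by truncating the series of relation (3) and reads off the expected visit counts; the matrix Q and all variable names are hypothetical.

```
// Minimal sketch (hypothetical values): approximate the fundamental matrix
// F = (I - Q)^{-1} = sum_{k>=0} Q^k by truncating the series, then read off
// the expected visit counts m_{1,j} = E[X_{1,j}] from its first row.
public class FundamentalMatrixSketch {

    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length;
        double[][] c = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int k = 0; k < n; k++)
                for (int j = 0; j < n; j++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    public static void main(String[] args) {
        // Q: transition probabilities between the transient states only (hypothetical example).
        double[][] Q = {
            {0.0, 0.5, 0.25},
            {0.0, 0.0, 0.5},
            {0.0, 0.0, 0.0}
        };
        int n = Q.length;

        // F starts as the identity matrix (the k = 0 term of the series).
        double[][] F = new double[n][n];
        for (int i = 0; i < n; i++) F[i][i] = 1.0;

        // Accumulate Q^1 + Q^2 + ...; Q^k tends to zero because Q is sub-stochastic,
        // so a modest number of terms is sufficient for this sketch.
        double[][] power = Q;
        for (int k = 1; k <= 50; k++) {
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    F[i][j] += power[i][j];
            power = multiply(power, Q);
        }

        // m_{1,j}: expected number of visits to transient state j starting from state 1.
        for (int j = 0; j < n; j++)
            System.out.printf("E[X_{1,%d}] = %.4f%n", j + 1, F[0][j]);
    }
}
```

In practice one would compute $(I - Q)^{-1}$ directly with a linear-algebra library; the truncated series only keeps the sketch self-contained, and the variance of relation (6) can be derived from $F$ in the same way.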
3 Predicting Performance of a Source Code

In this section we describe our approach for modeling a software system whose method invocations are represented by an absorbing DTMC, such that DTMC states represent the software methods and the transitions between states represent the transfer of control from one method to another. We assume that the system consists of $n$ methods, has a single initial state denoted by 1, and has a single absorbing or exit state denoted by $n$. Consider Fig. 1. Numbers on edges indicate the probability of moving from one method to another. In this paper the probability of going from method $x$ to method $y$ is computed as the number of method calls from $x$ to $y$ divided by the total number of outgoing method calls of $x$ (i.e., its fan-out). The method invocations of the source code are given by the one-step transition probability matrix $P$.

Figure 1. Modelling method invocations for a sample program with DTMC

Equation (8) shows the one-step transition probability matrix $P$ for Figure 1.

$$ (8) \quad P = \begin{bmatrix} 0 & 0.25 & 0 & 0 & 0 & 0 \\ 0 & 0.25 & 0 & 0.25 & 0 & 0.25 \\ 0 & 0 & 0 & 0 & 0.5 & 0.5 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}. $$

Let $PD_{i}$ denote the potential of distributability of the method indicated by node $i$ in the DTMC. During a single execution, the performance of the software, denoted by the random variable $P$, is given by:

$$ (9) \quad P = \prod_{i}^{n} PD_{i}^{X_{1,i}}, $$

where $X_{1,i}$ denotes the number of visits to the transient state $i$ starting from state 1. Therefore, the expected performance of a software system is as follows:

$$ (10) \quad E[P] = E\left[\prod_{i}^{n} PD_{i}^{X_{1,i}}\right] = \prod_{i}^{n} E\left[PD_{i}^{X_{1,i}}\right]. $$

Thus, to obtain the expected performance of the source code, we need to obtain $E[PD_{i}^{X_{1,i}}]$, which is the expected potential of distributability of method $i$ for a single run of the software. Using the Taylor series expansion, $E[PD_{i}^{X_{1,i}}]$ in relation (10) can be written as relation (11):

$$ (11) \quad E[PD_{i}^{X_{1,i}}] = PD_{i}^{E[X_{1,i}]} + \frac{1}{2}\left(PD_{i}^{E[X_{1,i}]}\right)(\log PD_{i})^{2}\, \text{Var}[X_{1,i}]. $$

Let $E[X_{i,j}] = m_{i,j}$ and $\text{Var}[X_{i,j}] = \sigma^2_{i,j}$. Relation (11) may then be written as:

$$ (12) \quad E\left[PD_{i}^{X_{1,i}}\right] = PD_{i}^{m_{1,i}} + \frac{1}{2}\left(PD_{i}^{m_{1,i}}\right)(\log PD_{i})^{2}\, \sigma_{1,i}^{2}, $$

where $m_{1,i}$ is the expected number of visits to state $i$ and $\sigma_{1,i}^{2}$ is the variance of the number of visits to state $i$; both can be obtained from the DTMC analysis. Relation (10) can thus be written as:

$$ (13) \quad E[P] = \prod_{i}^{n} \left( PD_{i}^{m_{1,i}} + \frac{1}{2}\left(PD_{i}^{m_{1,i}}\right)(\log PD_{i})^{2}\, \sigma_{1,i}^{2} \right). $$
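As a small numerical illustration of relation (13) (not taken from the paper), the sketch below combines hypothetical visit statistics $m_{1,i}$ and $\sigma^2_{1,i}$ with hypothetical per-method distributability values $PD_i$; all numbers and names are made up for demonstration.

```
// Minimal sketch (hypothetical values): evaluate relation (13),
// E[P] = prod_i ( PD_i^{m_{1,i}} + 0.5 * PD_i^{m_{1,i}} * (ln PD_i)^2 * sigma^2_{1,i} ).
public class ExpectedPerformanceSketch {
    public static void main(String[] args) {
        double[] pd    = {1.8, 2.4, 1.2};   // PD_i: potential of distributability of method i (hypothetical)
        double[] m     = {1.0, 2.5, 1.5};   // m_{1,i}: expected visits to method i from the initial state
        double[] sigma = {0.0, 0.6, 0.4};   // sigma^2_{1,i}: variance of the number of visits

        double expectedPerformance = 1.0;
        for (int i = 0; i < pd.length; i++) {
            double base = Math.pow(pd[i], m[i]);
            double term = base + 0.5 * base * Math.pow(Math.log(pd[i]), 2) * sigma[i];
            expectedPerformance *= term;   // product over all methods, as in relation (13)
        }
        System.out.printf("E[P] = %.4f%n", expectedPerformance);
    }
}
```

The per-method values $PD_i$ themselves are obtained from the sequential and asynchronous execution-time relations derived in Section 3.1.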
3.1 Computing Potential of Distributability of Method $i$

In this section, we determine the Potential of Distributability (PD) of each method in order to determine the overall performance (i.e., $P$) of a program. To achieve this aim, we determine $PD_{i}$ to measure the values of different distributions for method $i$. Invocations (calls) between methods are of two types: asynchronous and sequential. If, by distributing a program, two methods of the program are placed on two different machines, the calls between those methods become asynchronous; in a sequential call, the two methods are placed on the same machine. Taking communication time into account, our method considers both the asynchronous and the sequential mode for each call, to determine which mode (sequential or parallel) can reach the maximum speed-up. To estimate the speed-up, the execution time of all instructions should be estimated. The execution time of all instructions, except the nested calls, can be computed by existing methods [16-17]. The existing methods cannot easily be applied to calculate the execution time of nested calls, because the execution time of a caller method depends on whether the calls inside it are carried out in a sequential or an asynchronous manner. For example, consider Listing 1. During the time $t_{1}$, the current (caller) method continues to work without stopping until it reaches the point where it uses the results of the callee method. We call these points synchronization points [18], denoted by $S$. So, a method continues to work after calling a method at a remote location (another distributed segment) and waits for the call's response only when it requires that response. As shown in Listing 1, the level of concurrency in executing the caller and the callee methods depends on the time interval between the call point and the use point of the call results. The problem is the estimation of this interval. As shown in Listing 1, there may be other calls between the call point and the use point, and the execution of these calls can be either synchronous or asynchronous.

LISTING 1. Several nested calls

```
Method m( ) {
    Some statements   // t0
    Call R
    Some statements   // t1
    Use R             // S
    Some statements   // t2
}
Method R( ) {
    Some statements   // t3
    Call P
    Some statements   // t4
    Use P             // S
    Some statements   // t5
}
Method P( ) {
    Some statements   // t6
}
```

3.1.1 Estimated execution time for sequential mode

In Listing 1, considering methods $m$, $R$ and $P$, if all of them are executed sequentially (or synchronously), the estimated execution time is calculated as follows:

$$ (14) \quad PD_{m}^{\text{sequential}} = t_{0} + t_{3} + t_{6} + t_{4} + t_{5} + t_{1} + t_{2}. $$

We can write the above relation for Listing 1 in recursive form and expand it for nested calls of any depth:

$$ (15) \quad PD_{m}^{\text{sequential}} = t_{0} + PD_{R}^{\text{sequential}} + t_{1} + t_{2}, $$

$$ (16) \quad PD_{R}^{\text{sequential}} = t_{3} + PD_{P}^{\text{sequential}} + t_{4} + t_{5}, $$

$$ (17) \quad PD_{P}^{\text{sequential}} = t_{6}. $$

Generally, for sequential calls, the estimated execution time is given by:

$$ (18) \quad PD_{m}^{\text{sequential}} = \sum t_{i} + \sum_{j} PD_{I_j}^{\text{sequential}}, $$

where the second sum runs over all methods $I_j$ called by $m$.

3.1.2 Estimated execution time for asynchronous mode

Now we calculate the estimated execution time when the methods are executed in parallel (or asynchronously). Consider Listing 1 again. If methods $m$, $R$ and $P$ are executed asynchronously, the estimated execution time is calculated as follows:

$$ (19) \quad PD_{m}^{\text{asynch}} = t_0 + t_1 + I_{\text{init}} + \max\left(PD_{R}^{\text{asynch}} - t_1 + C_t + I_{\text{init}},\, 0\right) + t_2, $$

$$ (20) \quad PD_{R}^{\text{asynch}} = t_3 + t_4 + I_{\text{init}} + \max\left(PD_{P}^{\text{asynch}} - t_4 + C_t + I_{\text{init}},\, 0\right) + t_5, $$

$$ (21) \quad PD_{P}^{\text{asynch}} = t_6. $$
Here, $C_t$ is the communication time and $I_{\text{init}}$ is the preparation time for performing a remote call. Generally, the estimated execution time, allowing each call to be either synchronous or asynchronous, is calculated as follows:

$$ (22) \quad PD_{m} = \sum t_i + \sum_i a_i \cdot PD_{I_i} + \sum_i (1 - a_i) \left( I_{\text{init}} + \max\left((PD_{I_i} + C_t) - t_i + I_{\text{init}},\, 0\right) \right). $$

In the above relation, depending on whether a call is synchronous or asynchronous, the value of $a_i$ is 1 or 0, respectively. The goal is to determine the $a_i$ so as to minimize $PD_{m}$. In relation (22), $C_t$ is the communication time and $t_i$ is the estimated time between the call point of $I_i$ and the synchronization point $S_i$ (use point). For example, to obtain $PD$ for Listing 1, we combine the estimated times for the asynchronous execution (relation 22) and the sequential execution (relations 15-17) as follows:

$$ (24) \quad PD_{m} = t_0 + a_1 \cdot PD_{R} + t_1 + (1 - a_1) \left( I_{\text{init}} + \max\left(PD_{R} - t_1 + C_t + I_{\text{init}},\, 0\right) \right) + t_2, $$

$$ \qquad\;\; PD_{R} = t_3 + a_2 \cdot PD_{P} + t_4 + (1 - a_2) \left( I_{\text{init}} + \max\left(PD_{P} - t_4 + C_t + I_{\text{init}},\, 0\right) \right) + t_5, $$

$$ \qquad\;\; PD_{P} = t_6. $$

In relation (24), the aim is to determine $a_1$ and $a_2$ so as to minimize $PD_{m}$, $PD_{R}$, and $PD_{P}$.

Listing 2. A sample program code

```
class A {
    public void m() {
        // some statements T1
        B b = new B();
        int r1 = b.m();
        print(r1);                                  // S1
        C c = new C();
        int r2 = c.n();
        D d = new D();
        int r3 = d.p();
        // some statements T2
        if (r2 == 1) { ... }                        // S2
        // some statements T3
        F f = new F();
        int r4 = f.g();
        if (r1 > r2 && r1 > r3 && r1 > r4) { ... }  // S3 and S4
        // some statements T4
    }
}
class B extends A {
    static int m() {
        // some statements T5
    }
}
class C extends A {
    static int n() {
        // some statements T6
    }
}
class D {
    int p() {
        D d = new D();
        int r = d.p();
        print(r);                                   // S5
        F f = new F();
        int r1 = f.g();
        if (r > r1) { ... }                         // S6
    }
}
class F {
    int g() {
        // some statements T7
    }
}
```

Considering the program code in Listing 2, $PD_{A.m}$ can be written as relation (25). The aim of the PD relations in (25) is to determine $a_1$, $a_2$, $a_3$, $a_4$ and $a_5$ so as to minimize $PD_{(A,m)}$, $PD_{(B,m)}$, $PD_{(C,n)}$, $PD_{(D,p)}$ and $PD_{(F,g)}$. We use Dantzig's simplex algorithm [20] to determine the binary values of $a_i$ (for a synchronous call the value of $a_i$ is 1, and for an asynchronous call it is 0); the simplex method is a popular algorithm for linear programming. Then, after determining $PD$ for methods $m$, $n$, $p$ and $g$, we build the DTMC for the program of Listing 2, compute the potential of distributability (using relation 24) for each method, and then determine the expected performance (relation 13). The sequential execution time of the program is calculated as well. Finally, the speedup is calculated by dividing the sequential time by the expected performance. For the relations in (25), the communication overhead is considered to be 1 second and $T_1$, $T_2$, $T_3$, $T_4$ and $T_5$ (execution times of the non-call statements) are considered to be 40, 35, 45, 50 and 20 seconds. Table 1 shows the expected distribution potential (using relation 13), the sequential execution time, and the speed-up for Listing 2.
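To see relations (15)-(17) and (19)-(21) in action, here is a minimal sketch (not from the paper) that evaluates the sequential and the fully asynchronous execution-time estimates for the call chain of Listing 1; all timing values are hypothetical.

```
// Minimal sketch (hypothetical timings): estimated execution time of Listing 1
// in the purely sequential mode (relations 15-17) and the purely asynchronous
// mode (relations 19-21), plus the resulting speed-up.
public class ListingOneEstimateSketch {
    public static void main(String[] args) {
        // Hypothetical durations of the non-call statement blocks t0..t6 (e.g., in ms).
        double t0 = 10, t1 = 40, t2 = 5, t3 = 8, t4 = 30, t5 = 4, t6 = 25;
        double ct = 6;    // C_t: communication time of a remote call
        double init = 2;  // I_init: preparation time for a remote call

        // Sequential estimates, innermost call first (relations 17, 16, 15).
        double pdPSeq = t6;
        double pdRSeq = t3 + pdPSeq + t4 + t5;
        double pdMSeq = t0 + pdRSeq + t1 + t2;

        // Asynchronous estimates (relations 21, 20, 19): the caller keeps working
        // until the synchronization point and only waits if the callee is not done.
        double pdPAsync = t6;
        double pdRAsync = t3 + t4 + init + Math.max(pdPAsync - t4 + ct + init, 0) + t5;
        double pdMAsync = t0 + t1 + init + Math.max(pdRAsync - t1 + ct + init, 0) + t2;

        System.out.printf("PD_m sequential    = %.1f%n", pdMSeq);
        System.out.printf("PD_m asynchronous  = %.1f%n", pdMAsync);
        System.out.printf("Estimated speed-up = %.2f%n", pdMSeq / pdMAsync);
    }
}
```

The ratio of the two estimates gives the speed-up that fully asynchronous execution of Listing 1 would yield under these assumed timings; the paper's approach instead lets the simplex algorithm choose each $a_i$ so that the mixed relation (24) is minimized.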
Since the speed-up reported in Table 1 is greater than one, the program is capable of parallel execution; i.e., the parallel execution of the program is faster than its sequential execution.

4 Evaluation Results

In this section, we evaluate the proposed method. We want to determine whether, when the speed-up predicted by our method is greater than one, the actual execution indeed speeds up. To achieve this goal, we use the jDistributor tool [2]. jDistributor is a tool for the automatic distribution of a sequential program on homogeneous distributed systems using the Java Symphony middleware [19]. The algorithm used in jDistributor is a hierarchical clustering method whose goal is to find an appropriate clustering for distribution. We use the well-known travelling salesman problem (TSP) to evaluate the proposed method. We compute $PD^{\text{sequential}}$ and $PD^{\text{asynch}}$ from the source code. We then predict, from the PD relations, the estimated parallel and sequential execution times for different numbers of graph nodes and calculate the speed-up from them. Afterwards, we distribute the TSP program on a network of three computers using the jDistributor tool and measure the actual parallel and sequential execution times. The results are shown in Table 2.

5 Conclusion

In this paper, we introduced a new approach to determine whether source code is distributable before the distribution takes place. To achieve this goal, considering asynchronous and sequential calls, a mathematical relationship was proposed to evaluate different distributions of the same program code. We then model the software's method invocations by Discrete Time Markov Chains (DTMC). Using the properties of the DTMC and the proposed mathematical relationship, we can determine whether or not the source code can be distributed effectively on homogeneous processors.

5.1 Future Work

We plan to extend and improve this work as follows. Our aim is to propose an algorithm that improves the speed-up as much as possible in distributed environments by reordering instructions at compile time. Attempts will be made to increase the distance between a call point and its use point using instruction-scheduling techniques, in order to increase the concurrent execution time of caller and callee methods as much as possible.

References
Agent based Framework for QoS Measurement applied in SOA
A uniform Approach based on a QoS Meta Model

Andreas Hausotter, Arne Koschel
University of Applied Sciences & Arts Hannover, Faculty IV, Department of Computer Science, Hannover, Germany
email: Andreas.Hausotter@hs-hannover.de
email: Arne.Koschel@hs-hannover.de

Johannes Busch, Markus Petzsch, Malte Zuch
University of Applied Sciences & Arts Hannover, Faculty IV, Department of Computer Science, Hannover, Germany
email: Johannes.Busch@stud.hs-hannover.de
email: Markus.Petzsch@stud.hs-hannover.de
email: Malte.Zuch@hs-hannover.de

Abstract—Nowadays, enterprises are faced with a variety of major challenges, such as cut-throat competition in a global market, decreasing customer loyalty, and the strategic adjustment from a product-centric to a customer-centric perspective. Therefore, businesses need to change their operational processes in a flexible and agile manner to keep their competitive edge. A Service-oriented Architecture (SOA) may help to meet these needs. As the application landscape of enterprises is inherently heterogeneous and highly distributed, it is a great challenge to provide services with a certain quality. This is particularly the case when services are requested externally via the web. Therefore, quality of service (QoS) measurement and analysis is a crucial issue in Service-oriented Architectures. As the key contribution of this paper, we present a generic SOA Quality Model (SOA QM) based on the measurement standard ISO/IEC 15939, a SOA Information Model (SOA IM), and an architectural concept of a QoS System. The SOA IM is an XML-based specification of the measurement to be performed. The QoS System provides an execution platform for the SOA IM, is based on a Complex Event Processing (CEP) approach, and guarantees minimal impact on the SOA environment. The concepts are explained in detail using a standard process of the German insurance domain.

Keywords—Service-oriented Architecture (SOA); Quality of Service (QoS); Measurement Process; Complex Event Processing (CEP).

I. INTRODUCTION

Distributed IT systems are commonly used in today's companies to fulfill the needs for agility and scalability of their business processes and to manage the highly variable demand of the market. Typical scenarios are real-time logistics and delivery, just-in-time supply chain management and, in general, handling services in real time to fit market demands. The latter is common within the finance and insurance industry, both for the internal computation of risk and money management and for external customer services, such as proposal calculations (including the current market conditions). The external services in particular must have high quality in terms of time behavior. Google has shown that a latency of 100 ms up to 400 ms causes an impact of -0.2 % up to -0.6 % on the daily usage of web services by customers [1]. The integration of those services to run business processes in a stable way, fulfilling the varying demand of the market, is commonly realized with Service-oriented Architectures (SOA). Those architectures integrate (micro)services within distributed systems to run business processes with a high degree of agility. In particular, the distribution of the services over several systems allows scaling with market demand. Distributing and handling several services is a common concern of the insurance industry.
However, increasing distribution and more complex business processes will only gain more agility with SOA if the distribution of the services over several systems is realized in a reasonable way. To gain the required control over the distribution of those services, a measurement system is required. Measuring the general QoS in distributed systems is part of the motivation of this work and is explained in detail in the next subsection. The subsection after the general motivation presents our contribution to the problem of measuring QoS in SOA within the insurance-industry application scenario described in Section III.

A. Motivation

In many cases, it is not possible to forecast how much computing power and bandwidth the infrastructure needs to host the allocated services within the distributed computing system. Besides these infrastructure design decisions, there is the further problem of allocating the services to the right locations within the distributed system. This allocation influences how much bandwidth and computing power is available for the services and how many services share the same resources at the same time. So if several services use the same part of the infrastructure, this can lead to increasing latencies across the whole system, resulting in unfavorable time behavior for the users. Especially if some services are requested with intense market demand, latencies can rise in an unpredictable manner. Such a scenario is typical for the German insurance industry. At the end of the year, millions of users are able to switch their insurance contracts and therefore request the corresponding online services. The demand is not foreseeable, and the intense interaction between the insurance industry and the finance industry requires a high quality of those services. In particular, the historically low interest rates in today's market provoke fast-changing business models, the need for fast adaptation to new business processes, and the need to offer services of high quality to fulfill external user demands and the internal interaction within the finance industry. To fulfill those demands, distributed systems with SOA benefit from a measurement of the quality of those services across system boundaries, especially in terms of latency. Such a measurement system is the contribution of this work and is explained in the following subsection.

B. Contribution

The need for the new development of a flexible measurement system stems from the limitations of common solutions. The scenario of this work is based on a German insurance company, which already uses Dynatrace as a measurement solution [2]. The partner from the insurance industry is currently restructuring and modernizing its business processes and therefore needs a more flexible and generic approach to integrate an external measurement system for monitoring and analyzing the time behavior of its services. Additionally, a more detailed analyzer component was required to process the measured data. So on the one hand, the approach has to be integrated in a generic way with minimal interaction points within the SOA of the insurance partner, to guarantee simple integration during the continuous development process. On the other hand, the solution should offer a flexible and detailed analyzer component. This article presents our currently ongoing applied research work.
Since it is still "work in progress", we mostly focus on measurement concepts and an adequate measurement model here. We combine this with an initial description of the main insurance application scenario used by us. More technical details on our actual prototypic implementation, as well as QoS measurement results, will be presented in future work. The required solution was defined by the following:

• a generic approach to generate the measurement system,
• automatic integration of the measurement system into the existing SOA,
• loose coupling within the existing SOA,
• a flexible agent-based approach,
• a technology-independent approach using standards (XML),
• an individual and customizable analyzer component.

As stated above, besides technical concepts we also present some details from our main application scenario, which is based upon the ideas of the "Check 24" process. Within this process, different offerings for the same kind of insurance are compared. The offerings typically originate from several insurance companies; they are, for example, different offerings for car insurance. Based on certain input parameters, the end user eventually gets different insurance offers from this process. The proposal service used by "Check 24" is a common service throughout the German insurance sector and is implemented by various insurance companies. This service can be called externally by applications such as "Check 24" through a common interface given by a so-called "BiPro specification". BiPro is widely used throughout the German insurance sector, and the availability of these services has a significant impact on competitiveness. Internally the proposal service is, for example, used in the process "Angebot erstellen" ("create proposal") of the general German "Versicherungsanwendungsarchitektur (VAA)" (cf. [32]), which describes a set of standardized insurance processes working within a generalized "insurance application architecture". Our project partner has implemented a similar process for its own agent and customer portals.

The remainder of this paper is structured as follows: In the next Section II we discuss related work. Section III describes our application scenario in some detail. In Section IV and Section V our general Quality of Service (QoS) measurement model and measurement concept are described. Finally, Section VI concludes this paper and gives an outlook on future work.

II. PRIOR AND RELATED WORK

In prior work, we already discussed several aspects of the combination of SOA, Business Process Management (BPM), Workflow Management Systems (WfMS), Business Rules Management (BRM), and Business Activity Monitoring (BAM) [16][17][15], as well as Distributed Event Monitoring and Distributed Event-Condition-Action (ECA) rule processing [20][21]. Building on this experience, we now address the area of QoS measurement for combined BRM, BPM, and SOA environments within the (German) insurance domain context. Work related to our research falls into several categories, which we discuss in turn. General work on (event) monitoring has a long history (cf. [12][13] or the ACM DEBS conference series for overviews). Monitoring techniques in such (distributed) event-based systems are well understood; thus such work contributes general monitoring principles to the work presented here. This also includes commercial solutions, such as the Dynatrace system [2], and open-source monitoring software such as the NAGIOS solution [14].
In those systems, however, there is generally no focus on QoS measurement within SOAs. Also, they usually do not take application-domain-specific requirements into account (as we do with the insurance domain). Active DBMS (ADBMS) offer some elements for use in our work (see [18][19] for overviews). Event monitoring techniques in ADBMSs are partially useful, but they concentrate mostly on monitoring ADBMS-internal events and tend to neglect external and heterogeneous event sources. A major contribution of ADBMSs is their very well defined and proven semantics for the definition and execution of Event-Condition-Action (ECA) rules. This leads to general classifications for parameters and options in ADBMS core functionality [19]. We may capture options that are relevant to event monitoring within parts of our general event model. QoS aspects are handled within ADBMSs, for example, within the context of database transactions. However, since ADBMSs mostly do not concentrate on heterogeneity (and distribution), let alone SOAs, our work extends research in such directions. The work most closely related to our research directly combines the aspects of QoS and SOA. Since about 2002, several articles fall into this category. However, in almost all known articles the SOA part focuses on WS-* technologies. This is in contrast to our work, which takes the operational environment of our insurance industry partners into account. Examples of WS-*-related QoS work include QoS-based dynamic service binding [26][27], related WS-* standards such as WS-Policy [22], and general research questions for QoS in SOA environments [23]. Design aspects and models for QoS and SOA are, for example, addressed in [28][24][33][25][26]; SOA performance including QoS is addressed in [34]; and monitoring for SOA is discussed in articles such as [30][31][29][35].

III. APPLICATION SCENARIO

Customers use online platforms to compare the conditions and proposals offered by different companies. The online platform check24.com allows customers to compare different insurance proposals. Therefore, the insurance companies need to respond to those requests in order to be visible to potential customers on such platforms. The underlying scenario for this work is a service for calculating individual proposals for such online platforms. This service is requested automatically by the online customer information platform and needs to respond in a timely manner. The business process for calculating the proposal follows four steps:

- check input parameters for plausibility,
- call all additional relevant services to get required data,
- calculate the proposal based on internal business rules,
- deliver the proposal to the requesting online platform.

The partner from the insurance industry has already developed a distributed system to create and run such business processes. This system uses the SOA approach and integrates various microservices distributed across several locations. Measuring the time behavior is a feasible approach to maintain the overall system and scale it to changing market demands, in order to fulfill the required quality of such services (QoS). The distributed system is designed according to the concept illustrated in Fig. 1. The system part alpha is the enterprise service bus (ESB) of the system, which is responsible for integrating the business processes with further applications and services. Those business processes are parameterized by specific business rules, stored in a business rule database.
The communication with this business rule database is realized via web service calls. So, in general, alpha is the central communication component of the system. The system part beta is the current process engine that runs the business processes and is connected to alpha via JMS. Those business processes are influenced by the stored business rules and by the business process data, which are stored in a separate database. This distributed system defines the scenario in which several services are parameterized, called and integrated (via alpha) over several locations. The generic, XML-based measurement concept of this work uses this scenario to measure QoS parameters, especially the time behavior of services.

IV. MEASUREMENT MODEL

The assessment of QoS in Service-oriented Architectures is based on a SOA Quality Model (SOA QM), which combines characteristics and sub-characteristics in a multilevel hierarchy. For this purpose we adjusted the ISO/IEC-Standard 9126 to meet the SOA-specific requirements. Fig. 2 illustrates the characteristics, sub-characteristics and relationships between these concepts. In our research work we focus on Time Behavior, which contributes to Efficiency. Although ISO/IEC 9126 was revised by the ISO/IEC-Standard 25010 (Systems and software engineering – Systems and software Quality Requirements and Evaluation (SQuaRE) – System and software quality models, cf. [9][6]), we use ISO/IEC 9126 as a starting point because of its high degree of awareness in German-speaking countries (cf. [7]). Moreover, the German version of the ISO/IEC 25000 series has been prepared by the German Institute for Standardization (DIN) but is not yet available (cf. [11]). Instead of applying the quality metrics division of SQuaRE (i.e., ISO/IEC 2503x), our approach is based on the comprehensive ISO/IEC-Standard 15939 (cf. [9]). The basic model, which can be found in similar form in the contribution of Garcia et al. [10], has been aligned and extended with quality requirements, quality models, and some system components. In the following subsections we describe the main concepts of our SOA Measurement Information Model (SOA MM) as shown in Fig. 4.

A. Information Need and Information Product

The determination of the QoS in a SOA is always demand-driven, since both the specification ('What and how should be measured?') as well as the execution of the measurement itself and the subsequent interpretation of the results can cause significant organizational and technical effort. Therefore, first of all the Information Need, with objectives, potential risks and expected problems, is to be defined and documented properly. In terms of the application scenario presented in Section III, the objective is to assess the performance of the business process for calculating the offer, in order to identify and resolve problems in time.

B. Core measurement process

The Information Product is the result of the execution of the Core Measurement Process as depicted in Fig. 3 (cf. [8]). The Information Need provides the input for the subprocess Plan the Measurement Process (planning stage), while the subprocess Perform the Measurement Process (execution stage) generates the output, i.e., the Information Product. The process goal is to satisfy the Information Need. All concepts presented below directly or indirectly contribute to the Information Product.
C. Concepts of the planning stage

The focus of our research work is on SOA Services whose QoS is to be investigated. For this purpose, Quality Attributes are measured. In this context, the Measurable Concept outlines in an abstract way how the attribute values are determined to satisfy the required Information Needs. In doing so, it references one or more sub-characteristics of the SOA QM. For the application scenario described in Section III, the process performance is to be determined first and then evaluated. The corresponding Measurable Concept is the calculation of the processing time. To do this, the instantiation of a process and the termination of the process instance are to be determined. The process identification represents the Quality Attribute to be measured, and the sub-characteristic referenced by the Measurable Concept is Time Behavior. In order to implement the Measurable Concept and to perform measurements of attributes, first of all Measures are to be specified. A Measure assigns each Quality Attribute a value on a Scale of a particular Type. The ISO/IEC-Standard 15939 provides three different types of Measures, namely Base Measures, Derived Measures, and Indicators. A Base Measure specifies, by its Measurement Method, how the value of a Quality Attribute is to be determined. It is always atomic and therefore independent of other Measures. A Derived Measure uses one or more Base Measures or other Derived Measures, whilst the Measurement Function specifies the calculation method and thus the combination of the Measures used. For the application scenario illustrated in Section III, the Base Measures process instance instantiation \( t_{\text{inst}} \) and process instance termination \( t_{\text{term}} \) are specified. The identification of the process instance \( \pi ID \) represents the Quality Attribute measured by \( t_{\text{inst}} \) and \( t_{\text{term}} \). As the Measurement Method, we select the time of the start and end event, respectively. The Derived Measure processing time of the instance \( T_{\text{Proc}} \) is calculated by the Measurement Function

\[ T_{\text{Proc}}(\pi ID) = \Delta t = t_{\text{term}}(\pi ID) - t_{\text{inst}}(\pi ID). \]

Finally, an Indicator is a qualitative evaluation of Quality Attributes, which directly addresses the issue raised in the Information Needs. Indicators always use a nominal scale with qualifying values and thus show whether action or further root-cause analysis is needed. An Indicator is derived from other Quality Measures, i.e., Base and Derived Measures, and Indicators. The combination of the Quality Measures used and the method of calculation is based on an Analysis Model in conjunction with Decision Criteria using thresholds and target values. For the application scenario illustrated in Section III, the indicator adequacy of the processing time of a process instance, \( SLoT_{\text{Proc}}(T_{\text{Proc}}) \), is defined according to Table I:

| \( T_{\text{Proc}} \) | \( SLoT_{\text{Proc}} \) |
|---|---|
| \( \in (0, 3000\ \text{ms}] \) | high |
| \( \in (3000, 7000\ \text{ms}] \) | medium |
| \( \in (7000\ \text{ms}, \infty) \) | low |

**D. Concepts of the execution stage**

After the concepts of the planning stage have been presented, those of the execution stage are now explained briefly (subprocess Perform the Measurement Process, depicted in Fig. 3).
Section V will discuss their conceptual implementation in more detail. The actual measuring procedure, i.e., the execution of the instructions for determining the value of a Quality Attribute, is called Measurement. Hereby, Measurement Results are created and collected in a container, namely Data, which is inserted into a Data Store. The measurement system comprises different supporting software components, which are conceptually presented in Section V. The QoS Measurement performs the instructions specified in the Measurement Method or Measurement Function, respectively, to generate the Measurement Results for further processing. The QoS Analyser performs the statistical analysis and evaluation of the collected data and creates the Information Product. The QoS Reporting makes the Information Product available to the Measurement User (cf. Fig. 3).

**E. QoS Measurement Information Model**

We designed a domain-specific language to specify the values of the concepts introduced above according to the Information Need. This specification document is referred to as the QoS Information Model (QoS IM). The aim of this approach is to automate the measurement process by generating the artifacts required by the QoS system to execute a measurement. The QoS IM consists of an abstract and a concrete section. In the abstract section, the concepts of the Planning Stage and partly of the Execution Stage are specified. In the concrete section, the implementation-specific definitions are given. Since our QoS System is based on a complex event processing (CEP) approach, the specification of events, agents and rules is the subject of this section. A sophisticated XML Schema was developed to realize the domain-specific language. We opted for XML as a universally accepted standard that is highly flexible, platform- and vendor-independent, and supported by a wide variety of tools. In a follow-up project an XText-based tool will be developed that generates the (XML) QoS IM from (XText) source code. Its semantic model is shown in Fig. 4. The following rules for modeling apply:

- Concepts are mapped to XML elements (graphically represented by UML classes).
- Details of a concept are mapped to XML attributes of the owning element (graphically represented by UML instance variables).
- If possible, relationships between concepts are mapped to element hierarchies (graphically represented by UML associations).
- Otherwise they are mapped to constraints (i.e., keyrefs) (graphically represented by UML dependencies).

**V. MEASUREMENT CONCEPT**

In Section IV, a QoS IM based upon a SOA QM was described. To execute a specific QoS IM (and thus the subprocess "Perform the Measurement Process"), an execution platform is needed. This platform and the underlying QoS architecture are given in this section. First, the reasons for choosing this specific architecture are discussed briefly. Then an overview is given, detailing the central agent concept and CEP.

**A. Design Decisions**

As described above, the goal of the measurement concept is to provide the execution platform for a specific QoS IM. Therefore, basic design criteria for the measurement concept are derived from the QoS IM. Furthermore, quality requirements are given, which also have to be considered in the architecture design. These criteria are:

- measurement of Quality Attributes as described by the QoS IM,
- flexibility of measurement and computation,
- low impact (modification, performance, etc.) on the SOA components.
The proposed Measurement Concept is based upon a general architecture given in [3]. The basic idea is to separate the measurement functionality (e.g., sensors, agents, etc.) and the "analysis and statistics" functionality into different modules. This separation opens the opportunity to tailor each module to its specific functional and quality requirements. Overall, the given general architecture already fulfills the requirement to measure Quality Attributes and to provide the needed evaluations to produce Measurement Results and Information Products. The measurement module has to provide the QoS System with information about the observed service. To provide the needed flexibility, a sensor has to be placed into it. To keep the impact on the SOA at a low level, an agent-based approach was chosen. Agents encapsulate the needed parsing and computation and thus can be easily integrated into arbitrary SOA modules. Furthermore, measures to minimize the performance impact (threading, non-blocking calls, etc.) can be integrated into the agents. The "analysis and statistics" module does not have these strict requirements on performance impact; flexibility of computation and measurement execution is its main quality requirement. Thus a platform approach was chosen: artifacts generated from the QoS IM are placed into the QoS platform and executed.

**B. Overall system architecture**

On a high level, the QoS system splits the measurement agents and the further processing (QoS platform) into different components. This approach allows these components to be easily split into different processes to comply with the quality requirements. While the measurement agents (encapsulating the agent concept) represent the client component, the server component is represented by the QoS platform and contains the CEP engine and further analysis processing. Fig. 5 shows a high-level overview of the important components and their relationships. The general purpose of the measurement agents is to emit specific events based on the defined Base Measures. As described in Section IV, events are emitted, e.g., for process instance instantiation/termination. In general, concepts for agent implementation can be categorized by agent location and time of execution (cf. [5] and [4]). To measure a specific process instance, agents can be placed into the corresponding SOA service calls; thus, measurement agents are only logically part of the QoS System component. One approach, currently used, is the concept of interceptors, which offers a low modification impact and can deliver precise Measurement Results. The QoS Platform consists of several components, most notably the QoS Measurement and the QoS Analyzer. In general, the purpose of these modules is to collect, clean and process the measured data. Before any event is given to the Measurement Method, it is handled by the control module. The purpose of this module is event routing, general cleaning steps and an optional filter step. Cleaning (or formatting) events in the analyzer is needed because the measurement agents are placed in the monitored system and therefore must minimize their performance impact. The Measurement Method is implemented as a CEP rule executed by the engine and emits complex events for further near real-time processing and long-term analysis. The QoS Analyzer module provides a basis for statistical analysis and evaluations. Every complex event is stored in a Data Store implemented as a relational database.
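To illustrate what the analyzer computes from the agent events, the following is a minimal sketch (not part of the framework's actual code base) of the Derived Measure \( T_{\text{Proc}} \) and the Indicator \( SLoT_{\text{Proc}} \) from Section IV, assuming the thresholds of Table I; the class and method names are hypothetical.

```
// Minimal sketch (hypothetical names): compute the Derived Measure T_Proc
// from the two Base Measures t_inst and t_term of a process instance, and
// map it to the Indicator SLoT_Proc using the thresholds of Table I.
public class ProcessingTimeIndicatorSketch {

    // Derived Measure: T_Proc(piID) = t_term(piID) - t_inst(piID), in milliseconds.
    static long processingTimeMs(long tInstMillis, long tTermMillis) {
        return tTermMillis - tInstMillis;
    }

    // Indicator: nominal scale {high, medium, low} according to Table I.
    static String slotProc(long tProcMs) {
        if (tProcMs <= 3000) return "high";
        if (tProcMs <= 7000) return "medium";
        return "low";
    }

    public static void main(String[] args) {
        long tInst = 1_000;   // hypothetical start-event timestamp of instance piID
        long tTerm = 5_200;   // hypothetical end-event timestamp of instance piID

        long tProc = processingTimeMs(tInst, tTerm);
        System.out.println("T_Proc = " + tProc + " ms, SLoT_Proc = " + slotProc(tProc));
        // prints: T_Proc = 4200 ms, SLoT_Proc = medium
    }
}
```

In the actual platform, this evaluation would be driven by the QoS IM artifacts and a CEP rule rather than by hard-coded thresholds.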
The different analyses and evaluations defined by Derived Measures and Indicators are implemented through SQL and plain Java. Furthermore, the module provides an interface to the computed Information Product.

C. Applying the described measurement concept

In Fig. 6 the given measurement concept (QoS platform) is applied to the application scenario, thus providing the missing link between the QoS IM and the insurance-based application scenario. In this example a simplified scenario is used, consisting only of an external "Check24" mock-up service (representing a simple consumer), the central ESB, and the proposal service (which represents the producer). The task of the measurement model, and thus of the concept, is to measure the processing time of this service and to compute the Information Product for further evaluations. To measure this service, the measurement agents, defined through Base Measures, are placed directly into the ESB. This offers a measurement that is independent of service location and of different load-balancing scenarios. To minimize the integration effort, functionality provided by Spring Integration is used extensively (especially the interceptors for message queues). In this simplified example, the Base Measures (and thus the agents) only determine service call start/end times and announce these to the QoS platform. Furthermore, the agents try to minimize their performance impact by using non-blocking techniques and performing only the necessary parsing steps (e.g., extracting service call IDs). These will be shown in detail in further publications. As described above, the QoS platform performs further cleaning and processing steps to compute the QoS IM indicators \( SLoT_{Proc}(T_{Proc}) \) and provides these to downstream systems (e.g., reporting, presentation, load balancing, etc.).

VI. CONCLUSION AND FUTURE WORK

The presented approach for monitoring a distributed SOA environment is a promising path to take: The SOA QM aims to follow the ISO/IEC-Standard 15939 (cf. [8]), which enables a wide range of use cases. The Measurement Concept outlines an execution platform for the specific QoS IM, which should cause minimal impact on the SOA environment. The separation of Measurement Agents and QoS Analyzer allows lightweight agents on the one hand and a very capable analyzer component on the other. The still ongoing work of applying the QoS System to an application scenario relevant to our partner in the insurance industry (the "Check 24" process) will provide evidence of the practical usability of the created framework. In this paper the framework and the corresponding platform are applied to a basic, business-relevant scenario (the proposal service). Furthermore, it is planned to apply this technique to the more complex process "Angebot erstellen" ("create individual proposal") of the VAA, thus implementing a more complex scenario. It is expected that the monitoring system will help to discover potential bottlenecks in the current system design of our partner's distributed services and therefore create high value in the process of solving these issues. In future work, the actual measurement and analysis of the results are to be done. It is also planned to apply these results to cloud-based environments.

REFERENCES
{"Source-Url": "http://www.thinkmind.org/download.php?articleid=service_computation_2017_2_20_10015", "len_cl100k_base": 6117, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 24187, "total-output-tokens": 9024, "length": "2e12", "weborganizer": {"__label__adult": 0.0004570484161376953, "__label__art_design": 0.0008320808410644531, "__label__crime_law": 0.0007162094116210938, "__label__education_jobs": 0.002025604248046875, "__label__entertainment": 0.00016999244689941406, "__label__fashion_beauty": 0.00027942657470703125, "__label__finance_business": 0.006137847900390625, "__label__food_dining": 0.00041365623474121094, "__label__games": 0.0009007453918457032, "__label__hardware": 0.001712799072265625, "__label__health": 0.0010690689086914062, "__label__history": 0.0005397796630859375, "__label__home_hobbies": 0.00014293193817138672, "__label__industrial": 0.0009632110595703124, "__label__literature": 0.0004773139953613281, "__label__politics": 0.0005025863647460938, "__label__religion": 0.0004472732543945313, "__label__science_tech": 0.2308349609375, "__label__social_life": 0.0001308917999267578, "__label__software": 0.0306854248046875, "__label__software_dev": 0.71923828125, "__label__sports_fitness": 0.0002486705780029297, "__label__transportation": 0.000789642333984375, "__label__travel": 0.0002696514129638672}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 38265, 0.02001]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 38265, 0.16695]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 38265, 0.89275]], "google_gemma-3-12b-it_contains_pii": [[0, 5315, false], [5315, 11427, null], [11427, 16610, null], [16610, 19181, null], [19181, 24731, null], [24731, 27152, null], [27152, 30776, null], [30776, 38265, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5315, true], [5315, 11427, null], [11427, 16610, null], [16610, 19181, null], [19181, 24731, null], [24731, 27152, null], [27152, 30776, null], [30776, 38265, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 38265, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 38265, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 38265, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 38265, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 38265, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 38265, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 38265, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 38265, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 38265, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 38265, null]], "pdf_page_numbers": [[0, 5315, 1], [5315, 11427, 2], [11427, 16610, 3], [16610, 19181, 4], [19181, 24731, 5], [24731, 27152, 6], [27152, 30776, 7], [30776, 38265, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 38265, 0.03067]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
73a16ad7585c861db23a2464d91e5ed34fa876a6
Adobe Flash Catalyst CS5 Create expressive interfaces and interactive content without writing code Adobe Flash Catalyst CS5 software is an approachable new interaction design tool. Transform static artwork from Adobe Photoshop®, Illustrator®, and Fireworks® software into fully interactive projects without writing code, and publish as SWF or Adobe AIR® files. Collaborate with developers who use Adobe Flash Builder™ software, and leverage the expressiveness, reach, and consistency of the Adobe Flash Platform. New from Adobe, Flash Catalyst is built with the designer in mind, combining an intuitive user interface and toolset that will feel familiar—similar to Photoshop, Illustrator, and Fireworks—with the expressiveness, consistency, and reach of Adobe Flash technology. The result: a feature-rich interaction design tool you can successfully use to publish SWF files without writing code. Designing and building interactive applications doesn’t have to be hard, and with Flash Catalyst, it isn’t. That’s because Flash Catalyst uses a simple and intuitive menu-driven interface with easy-to-understand language. As you’d expect from a professional Adobe design tool, you always have complete control over the appearance of your artwork, and you publish your interactive content as a SWF file that displays with Adobe Flash Player 10. Flash Catalyst is developer-friendly, and writes the code for you in the background while you focus just on the task at hand—creating expressive, interactive content. Using the open-source Adobe Flex® framework as its underlying structure, Flash Catalyst helps ensure that when you have to hand more complex projects off to a developer, everything is already in place, ready to go. The intuitive Heads Up Display (HUD) in Flash Catalyst CS5 helps you transform static artwork into interactive designs, step by step, without writing code. With Adobe Flash Catalyst CS5, you can: **Use the power of the Adobe Flash Platform to create interactive content.** Rapidly create expressive interfaces and interactive content to your exacting design standards. Deliver your creative vision using the reach and consistency of the Adobe Flash Platform. Integrate video, sound effects, and dynamic media easily, and get the look you want quickly and precisely with layout tools and functions. Export projects for the web as finished SWF files or directly to a customer as Adobe AIR files that can run across platforms as desktop applications. **Explore interaction design without writing code.** Transform static Photoshop, Illustrator, or Fireworks designs into fully interactive projects without requiring any development or coding skills. Use pages and states to structure your content so you can easily control navigation and interactions, and create and edit smooth animated transitions. You can also design and preview functional data lists without involving a database. With an easy-to-use and approachable interface, Flash Catalyst enables designers to expand their skill set to include interactive projects. **Speed the design and development process.** Support for FXG, an XML-based file format, lets you share artwork between Photoshop, Illustrator, Fireworks, Flash Catalyst, and Flash Builder. When a project requires development—for example, a connection to data services—you can hand your Flash Catalyst project over to a developer who can open it directly in Flash Builder, preserving design fidelity and interactions defined by you. 
In addition, you can share assets and components with developers using Library Packages. **Who uses Flash Catalyst?** Flash Catalyst is an approachable new interaction design tool built for anyone who wants to create expressive interactive content without writing code. Many types of creative professionals and developers can benefit from the features in Flash Catalyst, including: **Interaction designers**, who can use Flash Catalyst to transform prototypes from Fireworks, Photoshop, or Illustrator into fully interactive projects without writing code. With the ability to create and edit smooth animated transitions and easily integrate video, sound effects, and dynamic media, they can create user interfaces and digital experiences. **Graphic designers**, who can use Flash Catalyst to transform static Photoshop or Illustrator designs into fully interactive projects without writing code. Using intuitive menu-driven functions to define interactions and to create components like buttons and scroll bars, they can create expressive microsites, portfolios, and application interfaces that clearly communicate their intended message. **Web designers**, who can use Flash Catalyst to transform prototypes from Fireworks, Photoshop, or Illustrator into fully interactive projects without writing code. Using mock data to define and preview the look and behavior of dynamic data without needing a database, they can create professional microsites, application interfaces, and website navigation elements. **Interactive designers**, who can use Flash Catalyst to transform prototypes from Photoshop, Illustrator, or Fireworks into fully interactive projects without writing code. With the ability to create and edit smooth animated transitions and integrate video, sound effects, and dynamic media easily, they can create user interfaces and digital experiences. **Web application developers**, who can use Flash Catalyst to work closely with designers and define custom components, interactions, and behaviors. With the ability to open Flash Catalyst project files right in Adobe Flash Builder software, and using SWF and Adobe AIR publishing options, they can develop rich Internet applications that can be delivered virtually anywhere. **Video editors**, who can use Flash Catalyst to integrate video content from Adobe Premiere Pro and After Effects into fully interactive projects without writing code. Using intuitive menu-driven functions to incorporate video and audio content and custom playback controls, they can assemble compelling stories, create video presentations, and pitch concepts. Top features of Adobe Flash Catalyst CS5 Fully customizable components At the core of content creation with Flash Catalyst are components—the building blocks that you use to create interactive content. These components include things like buttons, scroll bars, sliders, text fields, checkboxes, and data lists, and serve as a way for users to interact with the experience or application that you create. If you're used to laying out print pages by placing objects like text frames, picture frames, rectangles, and so on, you'll find working with components in Flash Catalyst is similar. If buttons or scroll bars don't sound very exciting to you, that's only because these elements are usually taken for granted—they appear virtually the same everywhere you look on the web or inside of dynamic applications. With Flash Catalyst, you have complete control over the appearance, or “skin,” of any component. 
Flash Catalyst offers two ways to customize the appearance of components: - **Start with static artwork.** Create static design comps in Photoshop, Illustrator, or Fireworks, and use them to create your Flash Catalyst project. Then, select individual elements in your design, and use the Convert Artwork To Component command to transform the static art into fully functional interactive components. - **Start with basic components.** Flash Catalyst comes with a library of fully functional basic wireframe components that you can simply add onto your page. You can customize these components using the Properties panel in Flash Catalyst, or you can take advantage of roundtrip editing capabilities and edit the components directly in Photoshop or Illustrator. This is perfect for when you want to build and test the interactions first, and then finalize the appearance afterwards. See "Roundtrip editing" on page 4 for more details. Powerful layout tools Designers who are used to having complete control over their designs and layouts will feel right at home with Flash Catalyst, which offers sophisticated interface design features similar to those found in Photoshop, Illustrator, Flash Professional, and Fireworks: - **Toolset and shortcuts.** Use familiar selection, transformation, text, shape, and magnification tools that you already know from other Adobe design applications. Keyboard shortcuts are similar as well, reducing the learning curve. - **Layers panel.** Giving you complete control over your artwork, the Layers panel clearly indicates the structure of imported Photoshop and Illustrator files, and makes it easy to define interactions for the various states of interactive components. - **Rulers, grids, and guides.** Ensure the accurate placement of objects while designing by using familiar rulers, grids, and positionable custom guides. Objects snap to these elements just as they do in other Adobe design applications. - **Properties panel.** For any selected object, you can easily adjust size, position, stroke, fill, color, and opacity settings in the Properties panel. You can even add filters like drop shadows, glows, blurs, or even specify transparency blend modes. - **Align and Arrange functions.** Designers rarely "eyeball it," and with familiar align and arrange tools, you don’t have to—it’s easy to make sure everything in your design lines up perfectly. Group artwork to organize your designs and to ensure easy selections. It’s obvious that Flash Catalyst was built from the ground up with the designer in mind—a welcome thought for any designer looking to author interactive content. **Roundtrip editing** With Flash Catalyst, you can start a project using static designs or artwork from Photoshop, Illustrator, or Fireworks. In addition, you can integrate just about any JPEG, GIF, or PNG file into your Flash Catalyst project, and then transform it into an interactive component. When you’re done, you can send it off to your client or manager for review. However, rarely do review cycles pass without change requests. With interaction design, seemingly small change requests—such as modifying the appearance of a button or a slider—can be extremely time-consuming, because the design comp is functional. In the past, you would have to start from the beginning by modifying the art in a design application and redefining all of the interactions. In Flash Catalyst, you can select a design element or component, and edit it in Photoshop CS5 or Illustrator CS5. 
Upon completion of the changes, the artwork is saved and automatically updated within the Flash Catalyst project. Any interactions or transitions that were applied to that element remain in place, unaffected by the artwork change. For example, you might want to change the way a button appears when a cursor passes over it, which would require a change to the Over state of the button. You can select the artwork in Flash Catalyst and choose **Modify > Edit In Adobe Illustrator CS5** to take advantage of the familiar tools and creative power found in Illustrator. This command sets the following steps in action:

1. Flash Catalyst copies the button component, and also captures a snapshot of the entire page.
2. Illustrator opens a new document and places the snapshot, at 20% opacity, on a locked layer. This allows you to see your button in the context of your entire page design.
3. The button appears in position in the Illustrator document. The Layers panel reveals that all four states of the button are present in the Illustrator file, each on its own top-level layer.

Use the full power of Adobe Illustrator CS5 to edit interactive components. For more information on the new creative features in Illustrator CS5, see Adobe Illustrator CS5 What's New.

At this point, you can use the full Illustrator toolset to edit or design any of the states of the button component. Upon completing your edits, you can save the Illustrator file and return to Flash Catalyst. The design changes you made in Illustrator are updated in the button component without disturbing its structure, or any interactions or transitions that you may have already defined for it. With roundtrip editing, you are free to tweak your designs at any point in the workflow, without losing the interactions you have defined in your project. It's never too late to make a design change to get everything just right with Flash Catalyst.

**Pages and states**

Flash Catalyst allows you to build an interactive user experience using concepts like pages and states, closely matching an experience that you are already familiar with from traditional print or web design, or from designing DVD/Blu-ray Disc interfaces. The Pages/States panel in Flash Catalyst makes it easy to navigate as you design, and provides a powerful, visual way to see exactly how content will look through different stages of user interaction.

**Pages**

Just as a brochure or a website may comprise several pages, an interactive experience or a rich Internet application may take you from one screen to the next. In Flash Catalyst, each of these screens (such as the login screen) is defined as a page. For example, say you are designing a microsite, using layer comps in Photoshop, multiple artboards in Illustrator, or multiple pages in Fireworks to design what each page will look like. When you bring your design into Flash Catalyst, each of those pages still exists, and you can easily define buttons that a user can trigger to move from one page to the next.

**States**

Components such as buttons or sliders have various states. For example, a button might have four different states, representing its appearance when enabled, disabled, moused over, or clicked on. The Pages/States panel enables precise control over individual components and lets you create unique designs for specific types of interactions such as rollovers and clicks.
Smooth animated transitions can easily be applied to objects, and actions can be triggered when a user interacts with the content and moves from page to page or state to state. [The Pages/States panel helps you organize your project and control various states for each component.] **Smooth animated transitions** One of the appealing aspects of the Adobe Flash Platform is the expressiveness of the graphics—instead of choppy transitions, artwork moves or changes in appearance gracefully. Designers have traditionally created smooth transitions manually using tweens, or blends between different states of artwork, and a technique called easing, which controls the speed or acceleration of an animation, requiring additional time-consuming steps to achieve the desired look. Flash Catalyst allows you to create smooth transitions with a single click in the timeline. This enables you to visually edit and create animated transitions between pages or states of components. You can specify the start and duration time in seconds, and quickly preview transitions and action sequences. With Flash Catalyst, you don’t have to manually create tweens or define motion—all of that happens automatically. Just as you define transitions such as fade and rotation in a presentation or movie, you can easily apply actions such as Fade, Move, Resize, Rotate, and Rotate 3D to any interaction in Flash Catalyst. The Smooth Transition function makes these effects all appear to ease in and out with that professional touch. Design-time data Creating interactive experiences or dynamic applications can present new and unfamiliar challenges for designers. For example, if you’re simulating the connection to a database, it’s difficult to see the end result of a design until it’s running and connected to a back-end system. Say you want to create an interactive experience that simulates scrolling through a list of suggested restaurants in a city. Similar to placeholder text or FPO (For Position Only) images in print design, Flash Catalyst allows you to use mock data such as text or images without having to actually connect to a live server. Flash Catalyst uses the term design-time data to refer to this capability. Through an easy-to-use panel, you can enter information into a customizable table, just as easily as you would fill out a table in InDesign or a spreadsheet in Microsoft Excel. First, you convert any artwork into a component with a variable number of columns and rows of data. Next, you connect a scroll-bar component to the list, allowing users to quickly navigate or scroll through the list items (in this example, restaurants located in a city). Easily add transitions and actions to control the behavior of each item in the list. This enables you to control the look and behavior of the complete user experience even when real data isn’t available. Just as important, using design-time data has additional benefits later in the workflow. A developer using Flash Builder can simply replace the design-time data with real data from a database or web service while maintaining the interactions and pixel-perfect design from Flash Catalyst. Video and dynamic media Designers are always looking for a creative edge, and adding video content can be compelling for creating interactive marketing and promotions. For video professionals, creating an online portfolio of their work gives them the ability to acquire more clients. 
With Flash Catalyst, you can integrate video, sound effects, and dynamic media as easily as working with static artwork—even scale or position video content as you would an image. Once you place video into a project, standard playback controls are automatically added, or you can turn any piece of artwork into a video control. Flash Catalyst dramatically reduces the time it takes to incorporate sound and video to your projects. Flash Catalyst can import FLV and F4V files like those exported from Adobe Premiere Pro or After Effects software. You can also use Adobe Media Encoder (included as a separate application) to convert just about any video file for use within Flash Catalyst. In addition to video, Flash Catalyst can also import SWF files, such as animations, that you or another designer or developer may have created with Flash Professional. Once you have them positioned in your layout, you can specify actions that include Play, Go To Frame And Play, Pause, and Stop, for full control over the playback of the SWF file. Publish as SWF and AIR files With Flash Catalyst, you can leverage the reach of the Adobe Flash Platform to deliver content easily across the web and the desktop. Once you’ve completed your project, you can export it as a finished SWF file that can be viewed with the popular Adobe Flash Player 10 or as an Adobe AIR file that can run across platforms as a desktop application. This makes it easy to deliver a finished project to a customer or to publish on the web, taking advantage of all of the expressiveness, consistency, and reach that Flash Player and AIR provide. Royalty-free sound effects Resource Central,* accessible from Adobe Soundbooth® CS5 (available separately), gives you access to over 9,000 sound effects you can import into your Flash Catalyst compositions. Flash Catalyst and Soundbooth are also available together in Adobe Creative Suite 5 Production Premium. *See the last page for details and limitations related to all Adobe online services. **System requirements** **Windows** - Intel® Pentium® 4 or AMD Athlon® 64 processor - Microsoft® Windows® XP with Service Pack 2, Windows Vista® Home Premium, Business, Ultimate, or Enterprise with Service Pack 1, or Windows 7 - 1GB of RAM (2GB recommended) - 1GB of available hard-disk space for installation, additional free space required during installation (cannot install on removable flash-based storage devices) - 1024x768 display (1280x800 recommended) with 16-bit video card - DVD-ROM drive - Java® Runtime Environment 1.5 (32 bit) or 1.6 - Broadband Internet connection required for online services* **Mac OS** - Intel® processor - Mac OS X v10.5.7 or v10.6 - 1GB of RAM (2GB recommended) - 1GB of available hard-disk space for installation, additional free space required during installation (cannot install on a volume that uses a case-sensitive file system or on removable flash-based storage devices) - 1024x768 display (1280x800 recommended) with 16-bit video card - DVD-ROM drive - Java® Runtime Environment 1.5 or 1.6 - Broadband Internet connection required for online services* With Flash Catalyst, you can publish accessible content with the following options at the click of a single button: - **Deploy To Web.** Publish all necessary support files to upload your content to a web server. Content can run within its own HTML page, or you can integrate it into existing web pages (for example, with Dreamweaver) to play back just like any embedded SWF content. 
- **Run Local.** Publish all necessary support files to run your content directly on your desktop. Content can run directly within your web browser. - **AIR.** Publish a single self-sufficient AIR file that can run directly on the desktop. You can use this option to send a client or a manager a quick preview of the project you’ve been working on or to deploy your content as a cross-platform desktop application. **Flash Builder integration** Flash Catalyst offers you the ability to design expressive interactive content without writing code. When a project requires development—for example, a connection to data services—you can hand your Flash Catalyst project over to a developer who can open it directly in Flash Builder, preserving design fidelity and interactions that you’ve defined. Everything you do in Flash Catalyst—from artwork creation to defining interactivity—is automatically expressed in MXML, the language of the Flex framework, behind the scenes. Flash Catalyst saves files in the FXP format—the same project file format that Flash Builder uses. This clean separation between the design and the application logic makes it easy for developers and designers to work together in an efficient and productive manner. Flash Builder 4 Standard is included with Adobe Creative Suite 5 Web Premium. **About Adobe Systems Incorporated** Adobe is the world’s leading provider of software solutions to create, manage, and deliver high-impact, reliable digital content. For more information, visit [www.adobe.com](http://www.adobe.com). --- *This product may allow you to extend its functionality by accessing certain features that are hosted online, including CS Live online services (“Online Services”), provided you have a high-speed Internet connection. The Online Services, and some features thereof, may not be available in all countries, languages, and/or currencies and may be discontinued in whole or in part without notice. Use of the Online Services is governed by separate terms of use and by the Online Privacy Policy, and access to some services may require user registration. Some Online Services, including services that are initially offered at no charge, may be subject to additional fees and require a separate subscription. For more details and to review the applicable terms of use and Online Privacy Policy, visit [www.adobe.com](http://www.adobe.com). For more information about CS Live online services, see [www.adobe.com/go/cslive](http://www.adobe.com/go/cslive).* --- Adobe, the Adobe logo, Acrobat, Adobe AIR, Adobe Premiere, After Effects, Creative Suite, Dreamweaver, Fireworks, Flash, Flash Builder, Flash Catalyst, Flex, Illustrator, InDesign, Photoshop, and Soundbooth are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States and/or other countries. Adobe, Adobe AIR, and Adobe Flash Player are trademarks or registered trademarks of Adobe Systems Incorporated in the United States and/or other countries. Mac OS is a trademark of Apple Inc., registered in the U.S. and other countries. Intel and Pentium are trademarks of Intel Corporation in the U.S. and other countries Microsoft, Windows, and Windows Vista are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Java is a trademark or registered trademark of Sun Microsystems, Inc. All other trademarks are the property of their respective owners. © 2010 Adobe Systems Incorporated. All rights reserved. 
1/10 Adobe Flash authoring tools With Creative Suite 5, Adobe delivers a variety of tools with the capability to author interactive content for the Adobe Flash Platform. Each project you work on is unique, presenting various requirements and specifications regarding design, development, and deployment. This comparison chart is a quick reference that will help you choose the best tool or tools for what you want to accomplish. <table> <thead> <tr> <th></th> <th>Flash Professional</th> <th>Flash Catalyst</th> <th>Flash Builder</th> </tr> </thead> <tbody> <tr> <td><strong>Typical projects</strong></td> <td>Rich content, interactive video content, advertising, games</td> <td>User interfaces, rich Internet applications, microsites, prototypes, widgets</td> <td>Rich Internet applications</td> </tr> <tr> <td><strong>Product description</strong></td> <td>An authoring tool that enables you to create immersive experiences that can include video content</td> <td>An interaction design tool that enables you to transform artwork into functional interfaces and interactive content</td> <td>An integrated development environment (IDE) that enables you to develop rich Internet applications</td> </tr> <tr> <td><strong>Project approach</strong></td> <td>Free-form design</td> <td>Structured interaction design</td> <td>Structured development</td> </tr> <tr> <td><strong>Project organization</strong></td> <td>Timeline and frames</td> <td>Pages and states</td> <td>Projects</td> </tr> <tr> <td><strong>Motion capabilities</strong></td> <td>Advanced vector animation</td> <td>Transitions, basic movement</td> <td>Transitions</td> </tr> <tr> <td><strong>Video playback</strong></td> <td>Encoding &amp; advanced playback controls</td> <td>Basic video playback controls</td> <td>Advanced playback controls</td> </tr> <tr> <td><strong>Extensibility</strong></td> <td>ActionScript coding or components</td> <td>Flash Catalyst components or export to Flash Builder to add more functionality</td> <td>Flex coding/components, ActionScript coding/components</td> </tr> <tr> <td><strong>Coding knowledge required</strong></td> <td>Some ActionScript coding</td> <td>None</td> <td>Advanced ActionScript or MXML</td> </tr> <tr> <td><strong>Playback support</strong></td> <td>Flash Player, AIR, Flash Lite, iPhone*</td> <td>Flash Player 10, AIR</td> <td>Flash Player, AIR</td> </tr> </tbody> </table> *The Packager for iPhone tool, included with Flash Professional CS5, compiles ActionScript bytecode into native iPhone application code. iPhone applications are distributed as iPhone application installer (IPA) files, via the iTunes store.*
{"Source-Url": "http://m.softchoice.com/files/pdf/brands/adobe/FC_CS5_whatsnew.pdf", "len_cl100k_base": 5301, "olmocr-version": "0.1.53", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 22590, "total-output-tokens": 5559, "length": "2e12", "weborganizer": {"__label__adult": 0.0008859634399414062, "__label__art_design": 0.0206451416015625, "__label__crime_law": 0.0003998279571533203, "__label__education_jobs": 0.0006380081176757812, "__label__entertainment": 0.0007276535034179688, "__label__fashion_beauty": 0.0003497600555419922, "__label__finance_business": 0.0005478858947753906, "__label__food_dining": 0.0003933906555175781, "__label__games": 0.00235748291015625, "__label__hardware": 0.0015993118286132812, "__label__health": 0.00025081634521484375, "__label__history": 0.00022292137145996096, "__label__home_hobbies": 0.00015425682067871094, "__label__industrial": 0.0002846717834472656, "__label__literature": 0.000400543212890625, "__label__politics": 0.00016129016876220703, "__label__religion": 0.000911712646484375, "__label__science_tech": 0.0025539398193359375, "__label__social_life": 0.0001537799835205078, "__label__software": 0.2236328125, "__label__software_dev": 0.74169921875, "__label__sports_fitness": 0.0002682209014892578, "__label__transportation": 0.000278472900390625, "__label__travel": 0.00032258033752441406}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28134, 0.01312]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28134, 0.06923]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28134, 0.88472]], "google_gemma-3-12b-it_contains_pii": [[0, 1885, false], [1885, 6097, null], [6097, 8956, null], [8956, 11786, null], [11786, 15432, null], [15432, 17074, null], [17074, 19371, null], [19371, 24377, null], [24377, 28134, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1885, true], [1885, 6097, null], [6097, 8956, null], [8956, 11786, null], [11786, 15432, null], [15432, 17074, null], [17074, 19371, null], [19371, 24377, null], [24377, 28134, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 28134, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28134, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28134, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28134, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28134, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28134, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28134, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28134, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28134, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28134, null]], "pdf_page_numbers": [[0, 1885, 1], [1885, 6097, 2], [6097, 8956, 3], [8956, 11786, 4], [11786, 15432, 5], [15432, 17074, 6], [17074, 19371, 7], [19371, 24377, 8], [24377, 28134, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28134, 0.09167]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
eabd172e7f2fd2d1b4d76abf502aa78e211089b5
Supporting the Sense-Making Processes of Web Users by Using a Proxy Server

Teppo Räisänen
Department of Information Processing Science, University of Oulu, Finland
Teppo.Raisanen@oulu.fi

Abstract

This paper presents a study on how we can support knowledge creation - especially a process called comprehension - in a Web 2.0 environment by providing new functionalities to users of existing Web services. The contribution of this paper is twofold. Firstly, a framework for providing new functionalities is presented. Secondly, a prototype Web service is implemented and evaluated. The prototype uses Wikipedia as an example and as a knowledge repository. The emphasis is on the prototype, a service that allows us to 1) insert sticky notes in Wikipedia articles, 2) enhance the translation capabilities of Wikipedia, and 3) highlight texts in Wikipedia. The analysis of the prototype service shows that we can provide new functionalities to Web users with a proxy server and that the implemented tools offer some support for the knowledge creation process called comprehension. The translation service proved especially useful.

1. Introduction

As technologies and protocols evolve, users and developers invent new ways (and revise existing ones) to utilize the Internet. The evolution of these technologies is most evident on the World Wide Web, where companies like Microsoft, Google and Yahoo! – among others – create new programming environments, paradigms, APIs, and services at an ever-accelerating pace. The second generation of these Internet-based services, which emphasizes online collaboration and the sharing of knowledge between users, is referred to as Web 2.0. According to O'Reilly [20], some of the key concepts and technologies associated with Web 2.0 are 1) the Web as platform, 2) an architecture of participation, 3) rich user experience, 4) blogging, and 5) wikis. Following O'Reilly [20] and Räisänen & Oinas-Kukkonen [25], we define Web 2.0 as a set of novel technologies and philosophies that use the Web as a platform to deliver services\(^1\) that emphasize user participation.

With the success of the new Web applications, the knowledge management paradigm is shifting its focus from the management of organizational knowledge to the management of social knowledge constructed mainly by communities of practice within the Web. One example of how knowledge is created in a Web 2.0 environment is the wiki [14]. The most famous example of the power of wiki technology is probably Wikipedia (http://en.wikipedia.org).

In this paper, we use a knowledge management framework called the 7C model [23] to understand the various knowledge creation processes and how to support them in the Web 2.0 environment. An environment supporting the 7C model has been presented by Räisänen & Oinas-Kukkonen [25]; it provided technologies and concepts that can be applied when designing support for the processes presented in the 7C model. The environment identified some existing tools that can be used to support the 7C processes (e.g. wikis), but it did not specify the needed functionalities very thoroughly. The aim of this paper is to design and implement a simple and extendable prototype that allows new tools to be easily integrated into the 7C environment. To achieve this, we use a design science approach [13]. Design science attempts to "create things that serve human purposes" [17], e.g. to build and evaluate an information system that would help Wikipedia readers to better comprehend the contents of the articles they read.
To evaluate the implemented prototype system, we performed interviews for qualitative analysis to gain insights on how to support comprehension. We argue that this prototype and the 7C tools should be implemented as Web services [2]. By doing this, the 7C tools can be used with any other Web services to create environments that offer support where needed. We will also show that when 7C tools are implemented as Web services, they can be integrated with other Web services by using a proxy server. We demonstrated this by building three simple 7C tools: translation, annotation, and highlight. They were integrated with Wikipedia to help readers learn and understand articles better. The system was then tested with six participants to see if it offers any support for the users.

\(^1\) Web service here refers to any Web-based "software applications identified by a URI, whose interfaces and bindings are capable of being defined, described, and discovered as XML artifacts" [26].

The rest of this paper is organized as follows. Chapter 2 presents the background for the study. Chapter 3 presents a solution for providing new functionalities to Web users. Chapter 4 presents a proof-of-concept implementation and a preliminary evaluation of the system. Finally, Chapter 5 discusses the findings and Chapter 6 concludes the paper.

2. Background

We use the 7C knowledge creation framework [23] to understand how knowledge is created (and maintained) in Web 2.0 communities. The 7C model describes the processes through which knowledge is created, shared and applied. It is based on Nonaka and Takeuchi's [19] SECI model\(^2\), and it assumes that new knowledge can emerge from the interaction between tacit and explicit as well as individual and social knowledge. According to the 7C model [23], the key processes in knowledge creation are comprehension, communication, conceptualization and collaboration (see figure 1).

\(^2\) The processes in the SECI model are socialization, externalization, combination and internalization.

Figure 1. The 7C model.

The 7C model states that users must be provided with a rich environment in which they can interact with existing knowledge and information in order to comprehend something new or innovative [23]. The interaction must go deeper than just browsing or reading existing content [25]. The user must be able to link things together (e.g. tag), and combine and play with the content (e.g. annotate). For example, "an associative link between two knowledge objects would explain to the user that these objects are somehow related or that they have something in common" [25]. Through better interaction with existing knowledge, users may gain new tacit knowledge. In the 7C knowledge creation spiral, this is a process called comprehension. Oinas-Kukkonen [23] defines it as "a process of surveying and interacting with the external environment, integrating the resulting intelligence with other project knowledge on an ongoing basis in order to identify problems, needs and opportunities." In psychology, comprehension refers to the understanding of individual stimuli, especially words, sentences or chunks of prose [6]. Oinas-Kukkonen's definition of comprehension is also equivalent to the process of sense-making, which is defined as "a motivated, continuous effort to understand connections (which can be among people, places, and events) in order to anticipate their trajectories and act effectively" [15].
The rich user experience (see [1]) offered by Web 2.0 technologies provides a good platform for this comprehension. The use of hypertext functionalities has also been identified as supporting comprehension [25]. Examples of possible hypertext functionalities are annotation, deep linking and tagging. To allow users to truly interact with existing knowledge, hypertext must be provided in a richer way than with static Web pages (or even with dynamic Web pages) [25]. Hypertext functionalities can promote options and allow freedom of choice with contextual support. This can provide users with a rich environment for comprehension [23]. For example, Räisänen and Oinas-Kukkonen [25] argue that the usage of "personal and shared tags supports comprehension by providing the kind of associative linking which enables the user to recognize similarities and possibly [...] identify specific needs and opportunities as well as potential problems."

Another way to support comprehension could be to provide users with an optimum user experience called flow [7]. According to studies, they could have a better chance of learning with such an experience [12]. As such, by improving the user's chances of flow, we can support his comprehension, too. Many times this means allowing the user to stay within flow [24], as interruptions can mean that the optimum user experience is disturbed and the user may drop 'out of flow'. Providing users with a way to see how others have interacted with the existing knowledge should also give them a way of comprehending something new. The user might be able to think outside of his world view, and letting him know how others see the same thing could be an eye-opener.

The user must also be able to share his newly gained knowledge and innovations with others, preferably in real time and face to face. In the 7C model this process is called communication. Many times new ideas might be lost if there is no one to share them with the moment they happen, or at least while they are still "fresh." The ability to share ideas face to face, and as they emerge, would also be important; we would have a better chance of understanding the shared ideas because the context of where and when the ideas emerged would still be present. When the sharer and receiver are out of context, the ability to remember and understand what is being shared diminishes. Conversely, while in context, the sharer has a better chance of articulating and sharing the ideas, and the receiver has a better chance of understanding them. Sharing ideas in real time and face to face can help comprehension, as feedback from other persons can support the comprehension process. This helps in reducing premature segmentation [21], as people can refine and represent the ideas before they write them down or enter them into an information system. In other words, users can share ideas "when one's thoughts are [...] mature enough for externalizing a [...] rationale" [21].

People are often in possession of relevant knowledge in given situations, but for one reason or another they choose not to share it. Such reasons may be lack of confidence, fear of rejection, or uncertainty regarding the consequences of sharing the knowledge. On the Web people can have different identities, which could help. John Doe might be afraid that people will laugh at his ideas, but Terminator123 (John's login name to a Web 2.0 portal) can present the same ideas anonymously.
Another reason may be related to the way people equate knowledge with power. For example, employees may want to secure their organizational status by sitting on knowledge, making themselves 'irreplaceable' but at the same time detracting from organizational knowledge creation and development. In the Web environment, this should not pose such a big problem, as the power of knowledge is not as apparent as it is in organizations.

When users are sharing their comprehensions and innovations, they need to reach a shared understanding of the issue at hand. In the 7C model this process is called conceptualization. It is a collective process of forming explicit concepts out of the different comprehensions and ideas that the users have shared. The outputs of the conceptualization process are explicit concepts that can be used to perform the actual task at hand. Many times, users have to reach a consensus to form the concepts. Providing tools to support comprehension and communication also helps conceptualization, as users must share their ideas and understand each other before they can reach a shared understanding of the task they are performing.

After users have reached a shared understanding through the conceptualization process, they can start applying the produced concepts in a group effort called collaboration. It is the process whereby the users produce explicit knowledge. This could be a Wikipedia article, a design document, or a set of functions written in Java. Through collaboration, users again have a chance of comprehending; thus the 7C model is a spiral. Through each cycle of the spiral, the group can become better at applying its expertise (this is called collective intelligence in the 7C model). Since the processes of communication (see e.g. [18]) and collaboration (see e.g. [8]) have received a lot of attention, we will not focus on them. Instead we focus on supporting comprehension. This will also indirectly support conceptualization, but studying that is beyond the scope of this paper. Section 2.1 reviews research findings related to the 7C model and annotation.

2.1 Related research

The 7C model has been presented in [23]. It has been used, e.g., to understand online collaboration [28]. Räisänen and Oinas-Kukkonen [25] have defined the system architecture for the 7C knowledge environment. They argued for the use of various Web 2.0 technologies to enable the kind of knowledge creation spiral that the 7C model poses. They found wikis to be suitable as a basis for the 7C environment. However, they did not identify specific tools that could be used to support the various processes. They concluded that the most crucial parts of the 7C model are comprehension and conceptualization. We follow this line of thinking and concentrate on supporting the comprehension of users reading Wikipedia articles. Hypertext functionality and knowledge rationale [25] have been identified as offering support for comprehension. Thus, if we can provide users with new (and useful) functionalities, or allow them to investigate the rationale behind knowledge objects (e.g. Wikipedia articles), they could have better chances of comprehending the information they are reading or studying. Comprehension is typically supported only indirectly in Web-based services like Wikipedia. Both sticky notes and text highlighting can be seen as forms of annotation, and the ability to annotate has been considered a "basic tool for collaboration and exchange of ideas" [4].
As such, annotations should support the 7C processes of communication (i.e., exchange of ideas) and collaboration. Later in this paper we investigate whether they also support comprehension. Annotations are also considered an important part of the semantic Web [3], a project that tries to facilitate information exchange by bringing structure to the meaningful content of the Web. Typically, annotations are made [27] 1) for the readers themselves, 2) for the author of the annotated text, or 3) for other readers of the text. They have been found to support concentrated, intensive reading [27] and to help re-reading, learning, and knowledge sharing [11]. Concentrated, intensive reading combined with improved learning should - in theory - help comprehension, too. There have been many different implementations to support annotations on the Web -- Annotea [14], Crit [29] and eLAWS [11], to name a few. Most of the existing solutions are from the "Web 1.0" era, so using Web 2.0 solutions could provide some new insights into annotations.

3. Providing new functionalities to Web users

This chapter presents our solution for supporting individual comprehension while taking the other 7C processes into account as well. We want our solution to be a Web service that can be used with any existing Web service. This way we can combine our tools with any other Web service and create mashups that support – at least partially – the knowledge creation processes of the 7C model.

One way to increase the functionalities of the Web is to use browser plug-ins (or add-ons). They interact with the host application – usually a Web browser – to provide certain functionalities. An example of a browser plug-in is Flash Player, which allows browsers to play Flash animations and movies that are embedded on a Web page. YouTube (www.youtube.com) uses such a plug-in. However, plug-ins have a few drawbacks. For example, the 7C model posits that the better the connectivity of the users, the better we can support the knowledge creation processes [23]. Users who do not have the right plug-in installed cannot access the system, which lowers connectivity [25]. Plug-ins are also usually browser-specific, so we would need different versions of the plug-in for different browsers, which again lowers connectivity. We would also need plug-ins for the mobile environment to best support connectivity. Finding a way to provide the same functionality without requiring installations would therefore be better.

Implementing 7C tools as Web services can do just that. In fact, we argue that all the 7C tools should be implemented as Web services so that these tools can be used with any other Web services to create environments that offer support where needed. This allows us to use the service from different platforms, both stationary and mobile. This increases connectivity, which is crucial as "users must have access to the system whether they are working at home or in the office" [25] or on the move. Having the 7C tools as Web services should, in theory, allow us to use them via any device that has a suitable browser. As we want the prototype to be extendable, we use JavaScript to offer the new functionalities. There are many existing JavaScript archives on the Web; using them helps us because the existing scripts are usually tested (i.e. they work and are bug-free) and compatible with various browsers.
In this way, we can simply choose an existing script and a service we want to use it with, then include the script in our system, usually with only minor changes. Using JavaScript means that we compromise connectivity only a little (browsers must support JavaScript in order to use the system), but we still have much better connectivity than, e.g., plug-ins would offer. Figure 2 shows a high-level description of how the prototype system works. Similar solutions have been presented before, see e.g. [11].

Figure 2. High-level description of the framework.

In short, the user opens a connection to the 7C Server (arrow 1 in figure 2) and requests it to display the original Web service with enhanced functionalities (the 7C tools). The 7C Server then sends an HTTP request to the original Web service (arrow 2). The original Web service returns the HTML code (e.g. index.html) to the 7C Server (arrow 3). The 7C Server then adds the new functionalities, in the form of JavaScript, to the original HTML code (arrow 4). Finally, it returns the result to the user, and his browser renders the original service enhanced with the new functionalities. The 7C Server itself works as a proxy between the client and the content (in this case Wikipedia): it combines browser and server functionality, as the user connects to it and it in turn retrieves the content from the original Web service.

Csikszentmihalyi [7] describes flow as "the holistic sensation that people feel when they act with total involvement." In human-computer interaction [5], flow has been shown to increase learning and creativity. Pearce and Howard [24] state that "flow activity is one in which the mind becomes effortlessly focused and engaged, rather than falling prey to distractions." They follow Draper [9], who states that users can "flick in and out" of flow from moment to moment. This view sees flow more as a process than as a state, and any distraction that happens causes the user to flick out of flow. This is especially important if the user was just about to comprehend something. A longer distraction can mean that the user loses the 'opportunity moment' to learn something. Both learning and creativity help the knowledge creation spiral, and they allow new knowledge to emerge. In fact, without learning it is difficult to see individuals gaining any new knowledge. In the 7C model, learning supports comprehension. Through learning, the user gains new insights which help him to comprehend "problems, needs and opportunities [...] and [...] embodying explicit knowledge in tacit knowledge" [23]. By giving users better chances for learning and creativity (i.e. keeping them in flow), we can also support comprehension.

In this study, we aim to help users stay in flow by improving existing Web translation services. Web translation usually works in one of two ways. In the first, the user copies the link of a selected page and uses an existing service to translate the whole page into the target language. The drawback is that the outcome of the translation is often not fluent. Readers of the translation can understand the meaning of the text, but something may be lost in the process. For example, phrase identification is a problem that arises when groups of words have a special meaning when they co-occur that is different from the individual meanings of the words [30]. In the context of the Web, when a user has found an interesting Web page, the need to go to a separate translation service may flick him out of flow.
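Before turning to the individual tools, the proxying step shown in Figure 2 can be made concrete. The following is a minimal, hypothetical Java sketch (not the paper's actual 7C Server; class names and the script path are illustrative assumptions): it fetches a page from the original service, injects a script reference just before the closing body tag, and returns the modified HTML to the browser.

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/** Hypothetical sketch of the proxying step: fetch, inject a script tag, return the page. */
public class InjectingProxy {
    private final HttpClient client = HttpClient.newHttpClient();

    /** Fetches the original page and injects a reference to the 7C tool scripts. */
    public String fetchAndInject(String originalUrl, String toolScriptUrl)
            throws IOException, InterruptedException {
        HttpRequest request = HttpRequest.newBuilder(URI.create(originalUrl)).GET().build();
        String html = client.send(request, HttpResponse.BodyHandlers.ofString()).body();

        // Inject the tool script right before </body> so it runs after the page content.
        String scriptTag = "<script src=\"" + toolScriptUrl + "\"></script>";
        int bodyEnd = html.lastIndexOf("</body>");
        return (bodyEnd >= 0)
                ? html.substring(0, bodyEnd) + scriptTag + html.substring(bodyEnd)
                : html + scriptTag;   // fall back: append if no closing body tag is found
    }

    public static void main(String[] args) throws Exception {
        InjectingProxy proxy = new InjectingProxy();
        String page = proxy.fetchAndInject("https://en.wikipedia.org/wiki/Pori",
                                           "/static/7c-tools.js"); // hypothetical script path
        System.out.println(page.length() + " characters returned to the browser");
    }
}
```

A full server would also serve the injected scripts themselves, keep subsequent navigation behind the proxy, and handle the database connections mentioned in Chapter 4; those details are omitted here.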
With whole-page translation, the user has to move from the page he is reading to another page, where he must insert the URL of the first page and wait for the service to translate it. Then he has to find the sentence he was reading and continue from there. In the second method of translation, the user has to copy and paste (or type) the word to be translated into the translation service, which then gives him the translation. Again, the user has to leave the page he was reading (i.e. he could be distracted) to find out the translation. Our aim is therefore to allow the user to translate single words while staying on the page he is reading, so that the distraction is as small as possible. Another advantage of translating only those words that the user does not understand is that we also help him learn a new language: when the whole page is translated, he does not see the original text and cannot associate the translated words with the original words. Of course, this type of solution requires that the user is somewhat familiar with the language he is reading. If the user does not understand the language of the page at all, we must use the translation services that translate the whole page.

3.2. Richer interaction - annotation and highlighting

Another way of supporting comprehension is to allow deeper interaction with existing knowledge stored on the Web. Following Oinas-Kukkonen's [23] definition (see also [25]), comprehension can be supported by allowing deeper interactions with the external environment and by providing better tools for surveying it. Translation can be seen as surveying, whereas annotating and highlighting are interaction with the external knowledge (in this case Wikipedia articles). Our solution is to allow users to post "sticky notes" and to highlight texts in Wikipedia articles. The sticky notes are like post-it notes that stay in the place where they are placed; the user can edit the content of a note. Highlighting allows the user to use the mouse to mark text, e.g. to highlight sections that he sees as important.

The type of a sticky note can be a comment, a tag, a question, an answer or an argument. Comments, tags and questions refer to specific Wikipedia pages, whereas answers refer to question-notes and arguments to answer-notes. Comments can be anything from saying 'I like this' to 'I disagree.' Tags are in essence short comments (usually only one word), e.g. 'Design' or 'San Francisco', that categorize the article in some way. Questions are notes that ask something about the article: 'Why is this sentence written this way?' Answers are responses to question-notes, and argument-notes relate to answers. An argument can be for or against the answer it is linked to. Asking questions, answering them, and arguing for and against the answers provides valuable knowledge on why certain parts of the article are written the way they are. In fact, typing the notes as question (Q), answer (A) or argument (R) could help capture some of the knowledge rationale behind the article [25]. This method is called QAR [22], and it is more commonly used to capture design rationale; we believe it can also capture knowledge. Users can, for example, ask questions about the parts of the article they do not fully understand. As other users answer the question, more rationale about the article is stored as well. Räisänen and Oinas-Kukkonen [25] propose that each produced knowledge object should be stored with the rationale for why it is built the way it is, i.e. its Knowledge Rationale.
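As a sketch of how the typed notes and their links might be represented, here is a small hypothetical Java model; the paper does not give its data model, so the class and field names are assumptions. It captures the note types described above and the constraint that answers attach to questions and arguments to answers.

```java
import java.util.Optional;

/** Hypothetical note types, following the QAR-style typing described in the text. */
enum NoteType { COMMENT, TAG, QUESTION, ANSWER, ARGUMENT }

/** A sticky note anchored to a Wikipedia article; answers and arguments link to a parent note. */
record StickyNote(long id, NoteType type, String articleTitle, String text, Optional<Long> parentId) {

    /** Comments, tags, and questions attach directly to an article page. */
    static StickyNote onArticle(long id, NoteType type, String articleTitle, String text) {
        if (type == NoteType.ANSWER || type == NoteType.ARGUMENT)
            throw new IllegalArgumentException("answers and arguments must reference a parent note");
        return new StickyNote(id, type, articleTitle, text, Optional.empty());
    }

    /** Answers reference question-notes; arguments reference answer-notes (the Q-A-R chain). */
    static StickyNote inReplyTo(long id, NoteType type, StickyNote parent, String text) {
        boolean valid = (type == NoteType.ANSWER && parent.type() == NoteType.QUESTION)
                     || (type == NoteType.ARGUMENT && parent.type() == NoteType.ANSWER);
        if (!valid) throw new IllegalArgumentException("invalid Q-A-R link: " + type + " -> " + parent.type());
        return new StickyNote(id, type, parent.articleTitle(), text, Optional.of(parent.id()));
    }
}

class QarDemo {
    public static void main(String[] args) {
        StickyNote q = StickyNote.onArticle(1, NoteType.QUESTION, "Pori", "Why is this sentence written this way?");
        StickyNote a = StickyNote.inReplyTo(2, NoteType.ANSWER, q, "It summarizes the cited source.");
        StickyNote r = StickyNote.inReplyTo(3, NoteType.ARGUMENT, a, "The source says something slightly different.");
        System.out.println(q + "\n" + a + "\n" + r);
    }
}
```

Stored in a relational table, such typed, linked notes would make the rationale behind an article browsable in the structured way proposed above.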
Annotations can also support conceptualization. As conceptualization is a social process, the decisions on how to build an object should be based on reasoning and the collective wisdom of the group. An example of such an object could be a Wikipedia article. Storing the rationale of the articles in a structured way allows us, for example, to browse that rationale. Currently, each Wikipedia article has a discussion page where people can argue about the contents of the article page, and these discussions contain much of the rationale behind the articles. A more structured way of organizing the discussions would provide a clearer view of why the document is built the way it is. The highlighting tool is simple. Whenever a user presses the Alt key and highlights some text with the mouse, the highlighted text stays highlighted. This allows him to mark important passages or sentences (as he would with paper and a magic marker) that he finds interesting. Highlighting together with sticky notes allows users to show which part of the text a note actually refers to; e.g. a note saying "This sentence does not make sense" next to a highlighted sentence allows users to relate the note and the sentence. Table 1 summarizes the tools. The translation tool helps the user stay in flow. The annotation and highlight tools enable deeper interaction with existing knowledge. By integrating these tools into Wikipedia, we argue that we can support individual users' comprehension. By allowing users to type the nodes, we can help comprehension even more. The power of annotations and highlights would further increase if we allowed them to be shared with other users. If users could read what others have annotated in the article they are reading, they would have a better chance of understanding something new. However, it is beyond the scope of this paper to study the effects of shared annotations and highlights. <table> <thead> <tr> <th>7C tool</th> <th>Functionality</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Translation</td> <td>Translate single words that the user double-clicks</td> <td>Allows the user to better stay in flow by not requiring him to use external translation services</td> </tr> <tr> <td>Annotation</td> <td>Insert "sticky notes" into the Web page; the user can edit the notes</td> <td>Allows the user to attach annotations (e.g. comments, questions, answers or arguments) to the Web page</td> </tr> <tr> <td>Highlight</td> <td>The user can highlight text in the page by selecting it with the mouse</td> <td>Allows the user to highlight those text sections he considers important, difficult, etc.</td> </tr> </tbody> </table> In addition, the tools described in Table 1 are by no means an exclusive set of tools supporting comprehension. Utilizing rich links or various types of metadata [4], for example, could also provide some support. For our proof-of-concept implementation, the tools of Table 1 seem feasible, and when combined they should offer some support for comprehension, and possibly for communication, conceptualization and collaboration as well. 4. Proof-of-concept implementation and preliminary evaluation We implemented the features described in Table 1 on top of Wikipedia. We have argued in the previous sections that each of them offers some support for comprehension. The first feature is translation of individual words. The user can double-click on any non-link word and the service will translate that word from English into the selected language (only the languages supported by Google Translator are available).
The service uses AJAX, so the translation requests are made without refreshing the whole page. We reused existing JavaScript functions found on the Internet and implemented a small script to connect the double-click event to Google's translation service. For the second feature (annotations) and the third feature (highlighting), we found existing scripts that we only slightly modified. We decided to let the users determine the type of an annotation (e.g. note, question, answer or argument). The users can also link the notes to each other, e.g. certain answers can be linked to questions, and arguments can be linked to answers (producing the Q-A-R structure discussed earlier). We used Java to implement the 7C server side. The server connects to Wikipedia and includes the wanted functionalities (double-click translation, annotations and highlighting) in the page. The server also handles the database connections, storing each translation and annotation.

Figure 3. Screenshot of the system, including two notes, highlighted text ("Pori is known, among other things"), and a translation of "foothold" from English to Spanish.

Figure 3 is a screenshot of the system displaying all three new functionalities on top of the Wikipedia page about Pori (a small town in Finland).

4.1. Preliminary evaluation

To study the proof-of-concept implementation, we interviewed six users between 27 and 35 years old. Of the participants, two were Japanese, two Finnish, one Chinese and one American. Four were male and two female. They were all experienced Web users, as they used the Web daily. They all worked in IT (they were all visiting scholars at a well-known American university) and used Wikipedia (mostly for search) approximately 3-5 times a week. They all considered Wikipedia an excellent source of information. They were asked to use the system for 15 minutes while the authors observed, after which we conducted interviews lasting around another 15 minutes. We then performed a positivistic analysis of the interviews. We tested the translation service with those participants who did not speak English as a first language. Wikipedia contains 2.2 million articles written in English (as of 1 March 2008; figures from http://en.wikipedia.org/). German has the second largest number of articles (716,000). As there is three times more English content than content in any other language in Wikipedia, providing non-native English readers with better tools for reading should help them understand these articles better. The first result of the study was that the prototype works. It is quite simple to include new JavaScript code in the 7C Server and enhance the functionality of Wikipedia, or of any other Web service that uses normal W3C standards. Problems can arise when we try to include new functionalities in a service that is mostly scripting-based (e.g. Facebook, www.facebook.com).

4.1.1. Translation. The first impression of the translation was that "It is good and it works." One of the main benefits seemed to be that it is quick. There is no need to copy and paste: "This is very useful, especially in my situation. I know the grammar but I don't have the vocabulary and I am using a dictionary every time when I read article[s]. I copy and paste, [and it is] hard and it takes [a] long time, for me this is easy to read." One participant also noted that it is "also quite natural that [when] you click a word you don't understand" the system translates the word.
One participant also indicated that the double-click translation might also support flow: "if you read the article with full concentration, you need to translate the word quickly. If you have to use [a] separate dictionary, you can lose your trail of thoughts." This could indicate that the translation tool causes less distraction and would not reduce their chances of staying in flow. There were a few possible problems: "sometimes word translated literally is not what it means in the context [i.e. in the sentence where it is used], but usually you can guess what it means." Allowing the user to stay in the Wikipedia article while translating a word might indeed help him to understand the article. He can see the sentence and the translation on the same page. He can then translate the whole sentence and continue reading the article. Sometimes the words might be so difficult that even a translation does not help. This happened when one of the participants tried the system with medical Wikipedia articles. Perhaps the system should show the definition of the word as well, or offer a link to some service providing definitions.

4.1.2. Annotations. Participants were used to annotating papers that they read. They also felt that the ability to annotate Wikipedia would be useful. Writing annotations could help readers: "It probably helps when I write it because I have to think what I write." In a way, the annotations could be used to store new ideas, and the articles the annotations are attached to could provide the context. This ability to link user annotations to the article probably supports comprehension, or at least remembering what the annotations were for. Perhaps more useful than writing would be the ability to read annotations that the user himself or someone else had written. Having someone else's annotations "would speed up reading the article." However, if the annotations were from someone whom the reader did not know, they might cause trouble: "Reading someone else's annotations could feel annoying if they were not relevant to me." So sharing annotations with colleagues could be more useful than sharing annotations with strangers. A few participants said that "the ability to share annotations would yield the biggest benefits." The prototype of the system did not allow users to share their annotations with each other. The ability to put links or pictures into the annotations was seen as great, especially when "the link [...] is relevant to the article."

4.1.3. Highlight. Highlights helped people to re-read articles: "When I go back, I only read the highlighted parts." This speeds up remembering articles read previously. It probably also helps users to recall things. Users also indicated that many benefits of the annotations also apply to highlights. For example, the ability to share highlights would be great. Seeing what others have highlighted would allow participants to understand each other, but it "might also distract you," especially if other users' highlights differed greatly from one's own. This could indirectly help communication and conceptualization if users knew which things they had highlighted differently, e.g. why somebody sees some part of the text as important and someone else sees another part.

5. Discussion

The proof-of-concept implementation of the system worked well. It provided the new functionalities to Wikipedia readers. The participants also found them useful. The translation service in particular was seen as a great addition to Wikipedia.
All of the participants said that the annotation and highlight services could help them, although it was not always clear if or how they actually improved comprehension. The participants did feel that the services could help, but as they did not use the system in real situations, they could not be sure how. This should be studied by using the system more thoroughly. In addition, the ability to share highlights and annotations should be included in future versions of the system. It could be that the true support for comprehension comes when we can see what others are doing and how our own thinking differs from theirs. The translation service was seen as the most useful. The reason for this is probably the fact that only one of the participants spoke English as a first language.

The proxy-based approach also has some drawbacks. First, Wikipedia articles change over time, so annotations and highlights can end up referring to text that has since been edited or removed. Second, download time is increased (the connection goes through a proxy server). With fast connections, download time is not a big issue. However, if we use the system with a mobile device, the increase in download time could be crucial, as download times are already longer with mobile devices. To tackle the first drawback, the annotations and highlights must be attached to specific versions of the article. This way, when someone, for example, deletes a sentence we have highlighted, we can still find the highlight in the previous versions of the article. Still, keeping annotations and highlights up to date can be a challenge. For example, do users have to transfer the annotations manually from older versions of the article to the new one? In addition to these drawbacks, using a proxy server might also disturb knowledge creation. Those users who do not use Wikipedia through the proxy will not see the annotations made by other users. However, if the proxy users comprehend something, they can probably contribute better to the Wikipedia articles. And if some new tool were especially beneficial, it could always be implemented in Wikipedia itself.

6. Conclusion

This paper presented a study of how we can support the sense-making processes of Web users by using a proxy server. The contribution of this paper is twofold. Firstly, the paper presented a way to use a proxy server to provide new functionalities to Web users. The new functionalities are included in existing Web services in the form of JavaScript code. The proxy server first downloads the original HTML code and includes the JavaScript in it before sending the file to the browser. This way the user does not need separate browser plug-ins to support the new functionalities. The service works with any device supporting JavaScript, without any installs. Secondly, as a proof of concept, we implemented a prototype of the server and three simple tools to support the 7C process called comprehension. To the authors' knowledge this is the first study that tries to provide functionalities supporting the 7C model. The tools could be used while reading Wikipedia articles. Analysis of the tools revealed that users did indeed find them helpful for learning (translation, annotation and highlight) and for staying in flow (translation). Based on our preliminary evaluation we cannot make a more thorough analysis. However, the translation tool was seen as an excellent addition by all non-native English speakers.
We conclude that the tools offer some support for comprehension. However, more studies are needed to investigate the system in real-life situations. Also, providing new functionalities to existing Web services (whether by using plug-ins or proxy servers) seems to offer interesting possibilities for supporting the sense-making processes of individual users. As future work, the complete set of functionalities that support the whole 7C knowledge creation spiral should be identified and implemented in the 7C Server. Especially the ability to share annotations with other users would be helpful in supporting comprehension. Using QAR we could also support conceptualization. The solution presented in this paper should also be tested more rigorously, e.g. in an experiment in which the prototype system is compared to normal use of Wikipedia.

Acknowledgements

The author would like to thank the reviewers, the members of the Persuasive Technology Lab at Stanford and professor Harri Oinas-Kukkonen for their feedback and comments.

References

Nonaka, I. and Takeuchi, H. (1995). The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. Oxford University Press.
{"Source-Url": "https://www.computer.org/csdl/proceedings/hicss/2009/3450/00/08-04-05.pdf", "len_cl100k_base": 7937, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 32044, "total-output-tokens": 10159, "length": "2e12", "weborganizer": {"__label__adult": 0.0003964900970458984, "__label__art_design": 0.0016393661499023438, "__label__crime_law": 0.0004258155822753906, "__label__education_jobs": 0.023162841796875, "__label__entertainment": 0.0003180503845214844, "__label__fashion_beauty": 0.000274658203125, "__label__finance_business": 0.0009217262268066406, "__label__food_dining": 0.0005235671997070312, "__label__games": 0.0007915496826171875, "__label__hardware": 0.0010538101196289062, "__label__health": 0.0008187294006347656, "__label__history": 0.0008554458618164062, "__label__home_hobbies": 0.000179290771484375, "__label__industrial": 0.00042629241943359375, "__label__literature": 0.0018777847290039065, "__label__politics": 0.0004737377166748047, "__label__religion": 0.0007300376892089844, "__label__science_tech": 0.204345703125, "__label__social_life": 0.0004470348358154297, "__label__software": 0.11199951171875, "__label__software_dev": 0.64697265625, "__label__sports_fitness": 0.0002510547637939453, "__label__transportation": 0.0006284713745117188, "__label__travel": 0.0003571510314941406}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 44800, 0.02547]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 44800, 0.56352]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 44800, 0.93354]], "google_gemma-3-12b-it_contains_pii": [[0, 4172, false], [4172, 8641, null], [8641, 14027, null], [14027, 18816, null], [18816, 23489, null], [23489, 28744, null], [28744, 33118, null], [33118, 37385, null], [37385, 42470, null], [42470, 44800, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4172, true], [4172, 8641, null], [8641, 14027, null], [14027, 18816, null], [18816, 23489, null], [23489, 28744, null], [28744, 33118, null], [33118, 37385, null], [37385, 42470, null], [42470, 44800, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 44800, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 44800, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 44800, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 44800, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 44800, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 44800, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 44800, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 44800, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 44800, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 44800, null]], "pdf_page_numbers": [[0, 4172, 1], [4172, 8641, 2], [8641, 14027, 3], [14027, 18816, 4], [18816, 23489, 5], [23489, 28744, 6], [28744, 33118, 7], [33118, 37385, 8], [37385, 42470, 9], [42470, 44800, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 44800, 0.03448]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
3d3d4ac2fe0c54445f5e39a79c9fac47c7aac0de
Memory Management Background - Program must be brought into memory and placed within a process for it to be run - **Input queue** – collection of processes on the disk that are waiting to be brought into memory to run the program - User programs go through several steps before being run Address binding of instructions and data to memory addresses can happen at three different stages: - **Compile time**: If a memory location is known *a priori*, **absolute code** can be generated; must recompile code if the starting location changes. - **Load time**: Must generate **relocatable code** if memory location is not known at compile time. - **Execution time**: Binding delayed until run time if the process can be moved during its execution from one memory segment to another. Need hardware support for address maps (e.g., **base** and **limit** registers). Multistep Processing of a User Program - Source program - Compiler or assembler - Compile time - Object module - Linkage editor - Load time - Load module - Loader - Execution time (run time) - In-memory binary memory image - Dynamic linking - Dynamically loaded system library - System library Logical vs. Physical Address Space The concept of a logical address space that is bound to a separate physical address space is central to proper memory management: - **Logical address** – generated by the CPU; also referred to as virtual address - **Physical address** – address seen by the memory unit Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme. Memory-Management Unit (MMU) - Hardware device that maps virtual/logical to physical address - The MMU consists of one or more “relocation registers” - The value in an MMU relocation register is added to every address generated by a user process at the time it is sent to memory - As such, the user program deals with *logical* addresses; it never sees the *real* physical addresses Dynamic relocation using a relocation register - CPU - Logical address: 346 - MMU - Relocation register: 14000 - Physical address: 14346 - Memory Dynamic Loading - Routine is not loaded until it is called - Better memory-space utilization; an unused routine is never loaded - Useful when large amounts of code are needed to handle infrequently occurring cases - No special support from the operating system is required, implemented through program design Dynamic Linking - Linking postponed until execution time - Small piece of code, **stub**, used to locate the appropriate memory-resident library routine - Stub replaces itself with the address of the routine, and executes the routine - Operating system needed to check if routine is in processes’ memory - Dynamic linking is particularly useful for libraries Swapping - A process can be swapped temporarily out of memory to a backing store (e.g. 
disk), and then brought back into memory for continued execution - **Backing store** – fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images - **Roll out, roll in** – swapping variant used for priority-based scheduling algorithms; lower-priority process is swapped out so higher-priority process can be loaded and executed - Major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped - Modified versions of swapping are found on many systems (i.e., UNIX, Linux, and Windows) Schematic View of Swapping 1. Swap out 2. Swap in Contiguous Allocation - Main memory usually divided into two partitions: - Resident operating system, usually held in low memory with interrupt vector - User processes then held in high memory - Single-partition allocation - Relocation-register scheme used to protect user processes from each other, and from changing operating-system code and data - Relocation register contains value of smallest physical address; limit register contains range of logical addresses – each logical address must be less than the limit register - A base and a limit register define a logical address space. HW address protection with base and limit registers [Figure: each CPU-generated address is checked against the base register and against base + limit; an address outside this range traps to the operating-system monitor as an addressing error, otherwise it is sent to memory.] Contiguous Allocation (Cont.) - Multiple-partition allocation - *Hole* – block of available memory; holes of various size are scattered throughout memory - When a process arrives, it is allocated memory from a hole large enough to accommodate it - Operating system maintains information about: a) allocated partitions b) free partitions (hole) Dynamic Storage-Allocation Problem How to satisfy a request of size $n$ from a list of free holes - **First-fit**: Allocate the *first* hole that is big enough - **Best-fit**: Allocate the *smallest* hole that is big enough; must search entire list, unless ordered by size. Produces the smallest leftover hole. - **Worst-fit**: Allocate the *largest* hole; must also search entire list. Produces the largest leftover hole. First-fit and best-fit are better than worst-fit in terms of speed and storage utilization Fragmentation - **External Fragmentation** – total memory space exists to satisfy a request, but it is not contiguous - **Internal Fragmentation** – allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used - Reduce external fragmentation by **compaction** - Shuffle memory contents to place all free memory together in one large block - Compaction is possible *only* if relocation is dynamic, and is done at execution time - I/O problem - Latch job in memory while it is involved in I/O - Do I/O only into OS buffers Paging - Logical address space of a process can be noncontiguous; process is allocated physical memory whenever the latter is available. - Divide physical memory into fixed-sized blocks called **frames** (size is a power of 2, between 512 bytes and 8192 bytes). - Divide logical memory into blocks of the same size called **pages**. - Keep track of all free frames. - To run a program of size $n$ pages, need to find $n$ free frames and load the program. - Set up a page table to translate logical to physical addresses. - Internal fragmentation.
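Returning to the dynamic storage-allocation strategies listed above, the following sketch (ours, not from the slides) shows first-fit and best-fit selection of a free hole for a request of size n.

```java
import java.util.List;

// Sketch of hole selection for contiguous allocation.
final class Hole {
    int start;
    int size;
    Hole(int start, int size) { this.start = start; this.size = size; }
}

final class HoleSelection {
    // First-fit: return the first hole that is big enough.
    static Hole firstFit(List<Hole> holes, int n) {
        for (Hole h : holes) {
            if (h.size >= n) return h;
        }
        return null; // no hole can satisfy the request
    }

    // Best-fit: return the smallest hole that is big enough (smallest leftover hole).
    static Hole bestFit(List<Hole> holes, int n) {
        Hole best = null;
        for (Hole h : holes) {
            if (h.size >= n && (best == null || h.size < best.size)) best = h;
        }
        return best;
    }

    public static void main(String[] args) {
        List<Hole> holes = List.of(new Hole(0, 100), new Hole(200, 40), new Hole(300, 60));
        System.out.println("first-fit hole size: " + firstFit(holes, 50).size); // 100
        System.out.println("best-fit hole size:  " + bestFit(holes, 50).size);  // 60
    }
}
```

Worst-fit would simply pick the largest hole instead; the slides note that it tends to perform worse than either strategy above.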
Address Translation Scheme - Address generated by CPU is divided into: - Page number \((p)\) – used as an index into a page table which contains the base address of each page in physical memory - Page offset \((d)\) – combined with the base address to define the physical memory address that is sent to the memory unit Address Translation Architecture [Figure: the CPU generates a logical address (p, d); p indexes the page table to obtain frame number f, and the physical address (f, d) is sent to physical memory.] Paging Example [Figure: a logical memory of four pages (page 0 – page 3) mapped through a page table onto frames of physical memory.] Paging Example [Figure: the classic letter example – logical memory holds the letters a–p in four 4-byte pages; the page table maps page 0 → frame 5, page 1 → frame 6, page 2 → frame 1 and page 3 → frame 2, so the pages end up scattered through the 32-byte physical memory.] Free Frames [Figure: the free-frame list before and after allocating frames to a new process.] New process page table: - 0: 14 - 1: 13 - 2: 18 - 3: 20 Implementation of Page Table - Page table is kept in main memory - *Page-table base register* (PTBR) points to the page table - *Page-table length register* (PTLR) indicates the size of the page table - In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction.
- This two-memory access problem can be ameliorated by the use of a special, fast-lookup, hardware cache called *associative memory* or *translation look-aside buffers (TLBs)* Associative Memory - Associative memory – parallel search over (page #, frame #) pairs Address translation (A', A'') - If A' is in an associative register, get the frame # out - Otherwise get the frame # from the page table in memory Paging Hardware With TLB [Figure: the CPU issues a logical address (p, d); on a TLB hit the frame number f comes directly from the TLB, on a TLB miss it is fetched from the page table, and the physical address (f, d) is sent to physical memory.] Effective Access Time - Associative lookup = $\varepsilon$ time units - Assume memory cycle time is $\tau$ time units - Hit ratio $\alpha$ – fraction of time that a page number is found in the associative registers; ratio related to number of associative registers - Assume simultaneous query of TLB and page table entry; cancel the read of the page table entry on a TLB hit **Effective Access Time (EAT)** $$t_{\text{eff}} = (\tau + \varepsilon)\alpha + 2\tau(1 - \alpha) = \alpha\tau + \alpha\varepsilon + 2\tau - 2\alpha\tau = 2\tau - \alpha\tau + \alpha\varepsilon = (2 - \alpha + \alpha\varepsilon/\tau)\tau$$ - Typical value for $\varepsilon/\tau$ is 1/5 - Look at limiting cases: - if $\alpha = 0$, $t_{\text{eff}} = 2\tau$ - if $\alpha = 1$, $t_{\text{eff}} = 1.2\tau$ Memory Protection - Memory protection implemented by associating a protection bit with each frame - **Valid-invalid** bit attached to each entry in the page table: - “valid” indicates that the associated page is in the process’ logical address space, and is thus a legal page - “invalid” indicates that the page is not in the process’ logical address space Valid (v) or Invalid (i) Bit In A Page Table <table> <thead> <tr> <th>Frame Number</th> <th>Valid-Invalid Bit</th> <th>Page</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>v</td> <td>page 0</td> </tr> <tr> <td>1</td> <td>v</td> <td>page 1</td> </tr> <tr> <td>2</td> <td>v</td> <td>page 2</td> </tr> <tr> <td>3</td> <td>v</td> <td>page 3</td> </tr> <tr> <td>4</td> <td>v</td> <td>page 4</td> </tr> <tr> <td>5</td> <td>v</td> <td>page 5</td> </tr> <tr> <td>6</td> <td>i</td> <td></td> </tr> <tr> <td>7</td> <td>i</td> <td></td> </tr> </tbody> </table> Page Table Structure - Hierarchical Paging - Hashed Page Tables - Inverted Page Tables Hierarchical Page Tables • Break up the logical address space into multiple page tables • A simple technique is a two-level page table Two-Level Paging Example A logical address (on a 32-bit machine with 4K page size) is divided into: - a page number consisting of 20 bits - a page offset consisting of 12 bits Since the page table is paged, the page number is further divided into: - a 10-bit page number - a 10-bit page offset Thus, a logical address is as follows: <table> <thead> <tr> <th colspan="2">page number</th> <th>page offset</th> </tr> </thead> <tbody> <tr> <td>$p_1$ (10 bits)</td> <td>$p_2$ (10 bits)</td> <td>$d$ (12 bits)</td> </tr> </tbody> </table> where $p_1$ is an index into the outer page table, and $p_2$ is the displacement within the page of the outer page table.
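A small sketch (ours, not from the slides) that ties the TLB and two-level paging material above together: splitting a 32-bit logical address into p1, p2 and d, and evaluating the effective-access-time formula.

```java
// Sketch: two-level address decomposition for a 32-bit address with 4 KB pages,
// and the effective access time of a paging system with a TLB.
public class PagingMath {

    static int[] splitTwoLevel(int logicalAddress) {
        int d  = logicalAddress & 0xFFF;          // low 12 bits: page offset
        int p2 = (logicalAddress >>> 12) & 0x3FF; // next 10 bits: index into the inner page table
        int p1 = (logicalAddress >>> 22) & 0x3FF; // top 10 bits: index into the outer page table
        return new int[] { p1, p2, d };
    }

    // EAT = (tau + eps) * alpha + 2 * tau * (1 - alpha), assuming the page-table
    // lookup is started in parallel with the TLB lookup and cancelled on a hit.
    static double effectiveAccessTime(double tau, double eps, double alpha) {
        return (tau + eps) * alpha + 2.0 * tau * (1.0 - alpha);
    }

    public static void main(String[] args) {
        int[] parts = splitTwoLevel(0x12345678);
        System.out.printf("p1=%d p2=%d d=%d%n", parts[0], parts[1], parts[2]);
        // With eps = tau/5: alpha = 0 gives 2*tau, alpha = 1 gives 1.2*tau.
        System.out.println(effectiveAccessTime(100.0, 20.0, 0.8)); // 136.0
    }
}
```

The limiting cases in the code comment match the slide: a miss on every access costs two full memory cycles, while a perfect hit ratio costs one cycle plus the associative lookup.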
Two-Level Page-Table Scheme Address-Translation Scheme - Address-translation scheme for a two-level 32-bit paging architecture [Figure: the logical address \( p_1 \; p_2 \; d \) is translated by using \( p_1 \) to index the outer page table, \( p_2 \) to index the selected page of the page table, and \( d \) as the offset within the frame.] Hashed Page Tables - Common in address spaces > 32 bits - The virtual page number is hashed into a page table. This page table contains a chain of elements hashing to the same location. - Virtual page numbers are compared in this chain searching for a match. If a match is found, the corresponding physical frame is extracted. Hashed Page Table [Figure: the page number p of the logical address (p, d) is run through a hash function; the chain at that hash-table slot is searched for an entry matching p, its frame number r is extracted, and the physical address (r, d) is formed.] Inverted Page Table - One entry for each frame of memory - Entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns that page - Decreases memory needed to store each page table, but increases time needed to search the table when a page reference occurs - Use hash table to limit the search to one — or at most a few — page-table entries Inverted Page Table Architecture Shared Pages • **Shared code** ◦ One copy of read-only (reentrant) code shared among processes (i.e., text editors, compilers, window systems). ◦ Shared code must appear in the same location in the logical address space of all processes. • **Private code and data** ◦ Each process keeps a separate copy of the code and data. ◦ The pages for the private code and data can appear anywhere in the logical address space. Shared Pages Example [Figure: processes \(P_1\), \(P_2\) and \(P_3\) share the pages ed 1, ed 2 and ed 3 through their page tables, while each has its own private data page (data 1, data 2, data 3).] Segmentation - Memory-management scheme that supports user view of memory - A program is a collection of segments. A segment is a logical unit such as: - main program, - procedure, - function, - method, - object, - local variables, global variables, - common block, - stack, - symbol table, arrays User’s View of a Program [Figure: a program seen by the user as segments such as subroutine, stack, symbol table and main program within a logical address space.] Logical View of Segmentation [Figure: segments in user space mapped onto non-contiguous regions of physical memory.] • Logical address consists of a two-tuple: \(<\text{segment-number}, \text{offset}>\) • **Segment table** – maps two-dimensional logical addresses into one-dimensional physical addresses; each table entry has: ◦ **base** – contains the starting physical address where the segment resides in memory ◦ **limit** – specifies the length of the segment • **Segment-table base register (STBR)** points to the segment table’s location in memory • **Segment-table length register (STLR)** indicates the number of segments used by a program; segment number \(s\) is legal if \(s < \text{STLR}\) Segmentation Architecture (Cont.) - **Relocation.** - dynamic - by segment table - **Sharing.** - shared segments - same segment number - **Allocation.** - first fit/best fit - external fragmentation Protection.
With each entry in the segment table associate: - validation bit = 0 ⇒ illegal segment - read/write/execute privileges Protection bits are associated with segments; code sharing occurs at the segment level. Since segments vary in length, memory allocation is a dynamic storage-allocation problem. A segmentation example is shown in the following diagram. Address Translation Architecture [Figure: the CPU issues a logical address (s, d); s selects a (limit, base) entry from the segment table; if d < limit the physical address base + d is sent to memory, otherwise the hardware traps with an addressing error.] Example of Segmentation [Figure: a program's segments (subroutine, stack, symbol table, main program, etc.) mapped through the segment table onto separate regions of physical memory, each described by a base and a limit.] Sharing of Segments [Figure: processes \(P_1\) and \(P_2\) share the editor segment (segment 0, base 43062) through their segment tables, while each keeps a private data segment (data 1 for \(P_1\), data 2 for \(P_2\), the latter at base 90003).] Segmentation with Paging – MULTICS - The MULTICS system solved problems of external fragmentation and lengthy search times by paging the segments. - The solution differs from pure segmentation in that the segment-table entry contains not the base address of the segment, but rather the base address of a page table for this segment. MULTICS Address Translation Scheme [Figure: the logical address (s, d) is translated by using the STBR and s to locate the segment-table entry; after a validity check, d is split into a page number and an offset d', the page table of the segment yields frame f, and the physical address (f, d') is used to access memory.] Segmentation with Paging – Intel 386 - As shown in the following diagram, the Intel 386 uses segmentation with paging for memory management, with a two-level paging scheme. Intel 80386 Address Translation [Figure: the logical address consists of a selector and an offset; the selector picks a segment descriptor from the descriptor table, which is combined with the offset to form a linear address; the linear address is split into directory, page and offset fields that walk the page directory and page table to reach the final page frame.]
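As a recap of the segment-table lookup described in the segmentation slides above, here is a small illustrative sketch (not part of the original slides) of the limit check and base addition.

```java
// Sketch: logical-to-physical translation with a segment table.
// Each entry holds a base and a limit; an offset beyond the limit traps.
final class SegmentEntry {
    final int base;
    final int limit;
    SegmentEntry(int base, int limit) { this.base = base; this.limit = limit; }
}

final class SegmentTranslation {
    static int translate(SegmentEntry[] segmentTable, int segment, int offset) {
        if (segment >= segmentTable.length) {
            throw new IllegalArgumentException("trap: segment number out of range");
        }
        SegmentEntry e = segmentTable[segment];
        if (offset >= e.limit) {
            throw new IllegalArgumentException("trap: addressing error (offset >= limit)");
        }
        return e.base + offset;
    }

    public static void main(String[] args) {
        SegmentEntry[] table = { new SegmentEntry(1400, 1000), new SegmentEntry(6300, 400) };
        System.out.println(translate(table, 0, 100)); // 1500
    }
}
```

The STLR check in the slides corresponds to the bounds test on the segment number, and the limit check corresponds to the offset test.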
{"Source-Url": "https://classes.cs.uoregon.edu/16F/cis415/Lectures/Lecture11-Memory.pdf", "len_cl100k_base": 5495, "olmocr-version": "0.1.50", "pdf-total-pages": 53, "total-fallback-pages": 0, "total-input-tokens": 70280, "total-output-tokens": 7042, "length": "2e12", "weborganizer": {"__label__adult": 0.0004711151123046875, "__label__art_design": 0.0006589889526367188, "__label__crime_law": 0.0004792213439941406, "__label__education_jobs": 0.0007390975952148438, "__label__entertainment": 8.606910705566406e-05, "__label__fashion_beauty": 0.00023174285888671875, "__label__finance_business": 0.00030493736267089844, "__label__food_dining": 0.0004041194915771485, "__label__games": 0.0011806488037109375, "__label__hardware": 0.027099609375, "__label__health": 0.0004329681396484375, "__label__history": 0.00040268898010253906, "__label__home_hobbies": 0.0003077983856201172, "__label__industrial": 0.0016279220581054688, "__label__literature": 0.00024068355560302737, "__label__politics": 0.00027751922607421875, "__label__religion": 0.0006566047668457031, "__label__science_tech": 0.0916748046875, "__label__social_life": 6.103515625e-05, "__label__software": 0.0114288330078125, "__label__software_dev": 0.859375, "__label__sports_fitness": 0.0004949569702148438, "__label__transportation": 0.0009937286376953125, "__label__travel": 0.00021696090698242188}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 19223, 0.04039]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 19223, 0.30159]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 19223, 0.80757]], "google_gemma-3-12b-it_contains_pii": [[0, 18, false], [18, 291, null], [291, 865, null], [865, 1198, null], [1198, 1698, null], [1698, 2084, null], [2084, 2235, null], [2235, 2545, null], [2545, 2905, null], [2905, 3609, null], [3609, 3660, null], [3660, 4197, null], [4197, 4257, null], [4257, 4424, null], [4424, 4783, null], [4783, 5296, null], [5296, 5909, null], [5909, 6447, null], [6447, 6763, null], [6763, 6950, null], [6950, 7302, null], [7302, 8846, null], [8846, 9026, null], [9026, 9534, null], [9534, 9853, null], [9853, 10029, null], [10029, 10820, null], [10820, 11183, null], [11183, 11693, null], [11693, 11781, null], [11781, 11918, null], [11918, 12527, null], [12527, 12555, null], [12555, 12820, null], [12820, 13150, null], [13150, 13445, null], [13445, 13858, null], [13858, 13891, null], [13891, 14314, null], [14314, 14666, null], [14666, 14983, null], [14983, 15116, null], [15116, 15219, null], [15219, 15775, null], [15775, 15990, null], [15990, 16348, null], [16348, 16535, null], [16535, 16963, null], [16963, 17507, null], [17507, 17838, null], [17838, 18394, null], [18394, 18567, null], [18567, 19223, null]], "google_gemma-3-12b-it_is_public_document": [[0, 18, true], [18, 291, null], [291, 865, null], [865, 1198, null], [1198, 1698, null], [1698, 2084, null], [2084, 2235, null], [2235, 2545, null], [2545, 2905, null], [2905, 3609, null], [3609, 3660, null], [3660, 4197, null], [4197, 4257, null], [4257, 4424, null], [4424, 4783, null], [4783, 5296, null], [5296, 5909, null], [5909, 6447, null], [6447, 6763, null], [6763, 6950, null], [6950, 7302, null], [7302, 8846, null], [8846, 9026, null], [9026, 9534, null], [9534, 9853, null], [9853, 10029, null], [10029, 10820, null], [10820, 11183, null], [11183, 11693, null], [11693, 11781, null], [11781, 11918, null], [11918, 12527, null], [12527, 12555, null], 
[12555, 12820, null], [12820, 13150, null], [13150, 13445, null], [13445, 13858, null], [13858, 13891, null], [13891, 14314, null], [14314, 14666, null], [14666, 14983, null], [14983, 15116, null], [15116, 15219, null], [15219, 15775, null], [15775, 15990, null], [15990, 16348, null], [16348, 16535, null], [16535, 16963, null], [16963, 17507, null], [17507, 17838, null], [17838, 18394, null], [18394, 18567, null], [18567, 19223, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 19223, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 19223, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 19223, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 19223, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 19223, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 19223, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 19223, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 19223, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 19223, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 19223, null]], "pdf_page_numbers": [[0, 18, 1], [18, 291, 2], [291, 865, 3], [865, 1198, 4], [1198, 1698, 5], [1698, 2084, 6], [2084, 2235, 7], [2235, 2545, 8], [2545, 2905, 9], [2905, 3609, 10], [3609, 3660, 11], [3660, 4197, 12], [4197, 4257, 13], [4257, 4424, 14], [4424, 4783, 15], [4783, 5296, 16], [5296, 5909, 17], [5909, 6447, 18], [6447, 6763, 19], [6763, 6950, 20], [6950, 7302, 21], [7302, 8846, 22], [8846, 9026, 23], [9026, 9534, 24], [9534, 9853, 25], [9853, 10029, 26], [10029, 10820, 27], [10820, 11183, 28], [11183, 11693, 29], [11693, 11781, 30], [11781, 11918, 31], [11918, 12527, 32], [12527, 12555, 33], [12555, 12820, 34], [12820, 13150, 35], [13150, 13445, 36], [13445, 13858, 37], [13858, 13891, 38], [13891, 14314, 39], [14314, 14666, 40], [14666, 14983, 41], [14983, 15116, 42], [15116, 15219, 43], [15219, 15775, 44], [15775, 15990, 45], [15990, 16348, 46], [16348, 16535, 47], [16535, 16963, 48], [16963, 17507, 49], [17507, 17838, 50], [17838, 18394, 51], [18394, 18567, 52], [18567, 19223, 53]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 19223, 0.12723]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
6ec64ec9f6745e503ec5038a50c7d0383696ee23
F-CUBE FACTORY: A FUZZY OLAP SYSTEM FOR SUPPORTING IMPRECISION

Miguel Delgado1 Carlos Molina2∗ Lázaro Rodríguez-Ariza3 Daniel Sánchez1 M. Amparo Vila1

1. Dpt. Computer Science and A.I., School of Computer Science, University of Granada 2. Dpt. Computer Science, School of Computer Science, University of Jaén 3. Dpt. Financial Economy and Accounting, School of Management, University of Granada Email: mdelgado@decsai.ugr.es, carlosmo@ujaen.es, lazaro@ugr.es, daniel@decsai.ugr.es, vila@decsai.ugr.es

ABSTRACT: The special needs of OLAP technology were the main reason for adopting a multidimensional view of the data. As a result of the use of this technology in new fields of knowledge (e.g. medical data) and of the integration of data from heterogeneous sources, it has become necessary for multidimensional models to support new needs. Modelling complex or ill-defined domains, or integrating data from semi-/non-structured sources (e.g. the Internet) or from sources with incompatibilities in their schemata, is complicated with crisp multidimensional models. In these situations we need a model able to manage the imprecision in the structures and data that results from the modelling and/or integration. In this paper we present an OLAP system based on a fuzzy multidimensional model that uses fuzzy logic. The use of fuzzy structures (hierarchies and facts) and the definition of the normal OLAP operations (roll-up, drill-down, dice, slice and pivot) enable the model to manage the imprecision of these situations, hiding at the same time the complexity from the user by means of user views.

Keywords: Multidimensional Model, OLAP, Fuzzy Logic

1 INTRODUCTION

Since the appearance of OLAP technology ([5]), different proposals have been made to support the special needs of this technology. In the literature we can see two different approaches. One of them is to extend the relational model to support the structures and operations typical of OLAP. The first proposal following this idea can be found in [12]. Since then, more proposals have appeared ([13]), and most present relational systems include extensions to represent DataCubes and operate over them. The other approach is to develop new models using a multidimensional view of the data. Many authors have proposed models in this way ([1, 3, 4, 15]). In the early 70's, the need for flexible models and query languages to manage the ill-defined nature of information in DSS was already identified ([11]). Nowadays, the application of OLAP technology to other knowledge fields (e.g. medical data) and the use of semi-structured (e.g. XML) and non-structured (e.g. plain text) sources introduce new requirements for the models. The systems now need to manage imprecision in the data and in the structures, and some multidimensional models that use fuzzy logic have been proposed for this purpose. Nevertheless, these models focus the imprecision on the facts. Continuing to use rigid hierarchies makes it very difficult to model some problems, which can translate into a loss of information when we need to merge data from different sources with some incompatibilities in their schemata. What we present in this paper is an OLAP system based on a fuzzy multidimensional model that uses fuzzy relations to model the hierarchies as well as fuzzy facts ([7]). Merging in information given by experts may improve the multidimensional schemata in some situations.
To help in this task, the model allows the hierarchical relations to be defined using linguistic labels ([8]), which is more intuitive for the expert than using concrete values. In the next section we present the multidimensional model underlying the F-CubeFactory system. In Section 3 the system is presented. The last section is dedicated to conclusions and future work.

2 FUZZY MULTIDIMENSIONAL MODEL

In this section we briefly introduce the fuzzy multidimensional model. A more detailed description can be found in [7, 8]. Here we only present the main concepts needed to understand the implemented model.

2.1 Fuzzy Multidimensional structure

**Definition 1** A dimension is a tuple $d = (l, \leq_d, l_\bot, l_\top)$ where $l = \{l_i, i = 1, ..., n\}$ so that each $l_i$ is a set of values $l_i = \{c_{i1}, ..., c_{im}\}$ and $l_i \cap l_j = \emptyset$ if $i \neq j$, and $\leq_d$ is a partial order relation between the elements of $l$ so that $l_i \leq_d l_k$ if $\forall c_{ij} \in l_i \; \exists c_{kp} \in l_k \,/\, c_{ij} \subseteq c_{kp}$. $l_\bot$ and $l_\top$ are two elements of $l$ such that $\forall l_i \in l, \; l_\bot \leq_d l_i \leq_d l_\top$.

We call each element $l_i$ a level. To identify the level $l_i$ of the dimension $d$ we will use $d.l_i$. The two special levels $l_\bot$ and $l_\top$ will be called base level and top level, respectively. The partial order relation in a dimension is what gives the hierarchical relation between the levels.

**Definition 2** For each dimension $d$, the domain is $\text{dom}(d) = \bigcup l_i$.

In the example above, the domain of the dimension Age is $\text{dom}(\text{Age}) = \{1, ..., 100, \text{Young}, \text{Adult}, \text{Old}, \text{Yes}, \text{No}, \text{All}\}$.

**Definition 3** For each $l_i$, the set $H_{l_i} = \{l_j \,/\, l_j \neq l_i \wedge l_j \leq_d l_i \wedge \nexists l_k, \; l_j \leq_d l_k \leq_d l_i\}$ is called the set of children of the level $l_i$.

Using the same example, the set of children of the level All is $H_{\text{All}} = \{\text{Group}, \text{Legal age}\}$. In all dimensions, the set of children of the base level will always be the empty set, as can be seen from the definition of the set of children.

**Definition 4** For each pair of levels $l_i$ and $l_j$ such that $l_j \in H_{l_i}$, we have the relation $\mu_{ij} : l_i \times l_j \rightarrow [0, 1]$, which we call the kinship relation.

The degree of inclusion of the elements of a level in the elements of their parent levels can be defined using this relation. If we use only the values 0 and 1, and we only allow an element to be included with degree 1 in a unique element of its parent level, this relation represents a crisp hierarchy. Following the example, the relation between the levels Legal age and Age is of this type. The kinship relation in this situation is

$$\mu_{\text{Legal age, Age}}(\text{Yes}, x) = \begin{cases} 1 & \text{if } x \in [18, 100] \\ 0 & \text{otherwise} \end{cases} \qquad \mu_{\text{Legal age, Age}}(\text{No}, x) = \begin{cases} 1 & \text{if } x \in [1, 17] \\ 0 & \text{otherwise} \end{cases}$$

If we relax these conditions and allow values in the interval [0,1] without any other limitation, we have a fuzzy hierarchical relation. This allows several hierarchical relations to be represented in a more intuitive way. An example can be seen in Figure 2, where ages are grouped according to linguistic labels.
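A crisp kinship relation such as the one between Legal age and Age can be written directly as code; the following sketch (our illustration, not part of the F-CubeFactory implementation) encodes the two cases above.

```java
// Sketch of the crisp kinship relation between the level Legal age ({Yes, No})
// and the base level Age ({1, ..., 100}): degrees are only 0 or 1.
public class CrispKinship {

    static double mu(String legalAge, int age) {
        if (legalAge.equals("Yes")) return (age >= 18 && age <= 100) ? 1.0 : 0.0;
        if (legalAge.equals("No"))  return (age >= 1  && age <= 17)  ? 1.0 : 0.0;
        throw new IllegalArgumentException("unknown value: " + legalAge);
    }

    public static void main(String[] args) {
        System.out.println(mu("Yes", 25)); // 1.0
        System.out.println(mu("No", 25));  // 0.0
    }
}
```

A fuzzy kinship relation would simply return intermediate degrees (for instance the Group memberships of Figure 2) instead of only 0 and 1.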
Furthermore, this fuzzy relation makes it possible to define hierarchies in which there is imprecision in the relationship between elements at different levels. In this situation, the value in the interval shows the degree of confidence in the relation.

**Figure 2** Kinship relation between the levels Group and Age

**Definition 5** For each pair of levels $l_i$ and $l_j$ of the dimension $d$ such that $l_j \leq_d l_i \wedge l_j \neq l_i$, the relation $\eta_{ij} : l_i \times l_j \rightarrow [0, 1]$ is defined as

$$\eta_{ij}(a, b) = \begin{cases} \mu_{ij}(a, b) & \text{if } l_j \in H_{l_i} \\ \bigoplus_{l_k \in H_{l_i}} \bigoplus_{c \in l_k} \left( \mu_{ik}(a, c) \otimes \eta_{kj}(c, b) \right) & \text{in other case} \end{cases}$$

where $\otimes$ and $\oplus$ are a t-norm and a t-conorm, respectively, or operators from the families MOM and MAM defined by Yager [19], which include the t-norms and t-conorms, respectively. This relation is called the extended kinship relation.

This relation gives us information about the degree of relation between two values in different levels inside the same dimension. To obtain this value, it considers all the possible paths between the elements in the hierarchy. Each path is calculated by aggregating the kinship relations between elements in two consecutive levels using a t-norm; the final value is then the aggregation of the results of all the paths using a t-conorm.

As an example, we show how to calculate the value of $\eta_{\text{All}, \text{Age}}(\text{All}, 25)$. In this situation we have two different paths:

- **All - Legal age - Age.** In Figure 3.a you can see the two ways to get to 25 from All going through the level Legal age. The result of this path is $(1 \otimes 1) \oplus (1 \otimes 0)$.
- **All - Group - Age.** This situation is very similar to the previous one. In Figure 3.b you can see the three different paths going through the level Group. The result of this path is $(1 \otimes 0.7) \oplus (1 \otimes 0.3) \oplus (1 \otimes 0)$.

**Figure 3** Example of the calculation of the extended kinship relation. a) path All - Legal age - Age b) path All - Group - Age

Now we have to aggregate these two values using a t-conorm to obtain the result. If we use the minimum as t-norm and the maximum as t-conorm, the first path gives $\max(\min(1,1), \min(1,0)) = 1$, the second path gives $\max(\min(1,0.7), \min(1,0.3), \min(1,0)) = 0.7$, and the final result is $\max(1, 0.7) = 1$. So the value of $\eta_{\text{All}, \text{Age}}(\text{All}, 25)$ is 1.
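The numeric example just given can be checked with a few lines of code. This sketch (ours, not the system's implementation) uses the minimum as t-norm and the maximum as t-conorm, with the membership degrees shown in Figure 3.

```java
// Sketch: eta_{All,Age}(All, 25) with min as t-norm and max as t-conorm,
// using the kinship degrees of the running example.
public class ExtendedKinshipExample {
    public static void main(String[] args) {
        // Path All - Legal age - Age: mu(All,Yes)=1, mu(Yes,25)=1; mu(All,No)=1, mu(No,25)=0
        double legalAgePath = Math.max(Math.min(1.0, 1.0), Math.min(1.0, 0.0)); // 1.0
        // Path All - Group - Age: mu(Young,25)=0.7, mu(Adult,25)=0.3, mu(Old,25)=0
        double groupPath = Math.max(Math.min(1.0, 0.7),
                           Math.max(Math.min(1.0, 0.3), Math.min(1.0, 0.0)));  // 0.7
        double eta = Math.max(legalAgePath, groupPath);                         // 1.0
        System.out.println("eta_{All,Age}(All, 25) = " + eta);
    }
}
```

Replacing min/max with another t-norm/t-conorm pair, or with Yager's MOM/MAM operators, changes only the two aggregation functions used here.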
**Definition 6** We say that any pair $(h, \alpha)$ is a fact when $h$ is an $m$-tuple on the domains of the attributes we want to analyze, and $\alpha \in [0, 1]$.

The value $\alpha$ controls the influence of the fact in the analysis. The imprecision of the data is managed by assigning an $\alpha$ value representing this imprecision. When we operate with the facts, the aggregation operators have to manage these values in the calculations. The arguments of an operator can be seen as a fuzzy bag, because they are a set of values with a degree in the interval [0,1] that can be duplicated. For a characterization of fuzzy bags see [6]. The result of the aggregation has to be a fact too. So, in the fuzzy case, the definition of aggregation operators is the following.

**Definition 7** Let $\tilde{B}(X)$ be all the possible fuzzy bags defined using elements of $X$, $\tilde{P}(X)$ the fuzzy power set of $X$, and $D_x$ a numeric or natural domain. We define an aggregation operator $G$ as a function $G : \tilde{B}(D_x) \rightarrow \tilde{P}(D_x) \times [0,1]$.

When we apply an aggregation operator, we summarize the information of a bag of values into a single value. It is not always possible to undo these operations. So if we want to undo operations that reduce the level of detail in a DataCube, we need something to prevent this problem. We therefore define the object history, which stores the aggregation states of a DataCube.

**Definition 8** An object of type history is the recursive structure
$$H^0 = \Omega \qquad H^{n+1} = (A, l_b, F, G, H^n)$$
where $\Omega$ is the recursive base clause, $F$ is the fact set, $l_b$ is a set of levels $(l_{b1}, \ldots, l_{bn})$, $A$ is an application from $l_b$ to $F$ ($A : l_b \rightarrow F$), and $G$ is an aggregation operator.

Now we can define the structure of a fuzzy DataCube.

**Definition 9** A DataCube is a tuple $C = (D, l_b, F, A, H)$ such that $D = (d_1, \ldots, d_n)$ is a set of dimensions, $l_b = (l_{b1}, \ldots, l_{bn})$ is a set of levels such that $l_{bi}$ belongs to $d_i$, $F = R \cup \{\emptyset\}$ where $R$ is the set of facts and $\emptyset$ is a special symbol, $H$ is an object of type history, and $A$ is an application defined as $A : l_{b1} \times \ldots \times l_{bn} \rightarrow F$, giving the relation between the dimensions and the facts defined. If for $\bar{c} = (c_1, \ldots, c_n)$ we have $A(\bar{c}) = \emptyset$, this means that no fact is defined for this combination of values.

**Definition 10** We say a DataCube is basic if $l_b = (l_{1\bot}, \ldots, l_{n\bot})$ and $H = \Omega$.

### 2.2 The Linguistic DataCube

One possibility for extending and improving the hierarchies of multidimensional schemata is to incorporate the knowledge that experts have about those hierarchies, which is usually given in a linguistic manner. Let us point out that in many cases the experts not only use fuzzy or imprecise classes in the nodes, but they also express the hierarchical relations themselves vaguely. It is obvious that considering a hierarchy with linguistic assessments is equivalent to considering the kinship and the extended kinship relations to be linguistic. In the following, $\widetilde{[0,1]}$ will denote the set of fuzzy numbers on the unit interval. In this section we introduce the main concepts of linguistic hierarchies. For a deeper study see [8]. The first concept that has to be redefined is the kinship relation, so that it considers linguistic relations.
### 2.2 The Linguistic DataCube

One possibility to extend and improve the hierarchies of multidimensional schemes is to incorporate the knowledge that experts have about those hierarchies, which is usually given in a linguistic manner. Let us point out that in many cases the experts not only use fuzzy or imprecise classes in the nodes but also express the hierarchical relations themselves vaguely. It is obvious that considering a hierarchy with linguistic assessments is equivalent to considering the kinship and the extended kinship relations to be linguistic. In the following, \(\widetilde{[0,1]}\) will denote the set of fuzzy numbers on the unit interval. In this section we introduce the main concepts of linguistic hierarchies; for a deeper study see [8]. The first concept that has to be redefined in order to consider linguistic relations is the kinship relation.

**Definition 11** For each pair of levels \(l_i\) and \(l_j\) with \(l_j \in H_i\) there exists a relation \(\tilde{\mu}_{ij} : l_i \times l_j \longrightarrow \widetilde{[0,1]}\), which is called the linguistic kinship relation.

In the linguistic case, to aggregate the linguistic relations we need operators whose behavior is similar to that of t-norms and t-conorms, with some limitations (see [8]). Several definitions of linguistic aggregation operators may be found in the literature (see [10], [16]). We have defined another operator ([8]) that can act as a t-norm or a t-conorm over linguistic values according to a parameter, using a fuzzy extension of the OWA operator ([20]).

**Definition 12** An aggregation operator \(A^{OM}_\beta\) is a function \(A^{OM}_\beta : \widetilde{[0,1]}^n \rightarrow \widetilde{[0,1]}\) defined as
\[
A^{OM}_\beta(a_1, \ldots, a_n) = OWA^{OM}_w(a_1, \ldots, a_n)
\]
where \(OWA^{OM}_w\) is a fuzzy OWA operator with ranking method \(OM\), and \(w\) is a weight vector with \(w_1 = \beta\), \(w_n = 1 - \beta\), and \(w_i = 0\) for all \(i \in \{2, \ldots, n - 1\}\).

Now we can define the extended kinship relation.

**Definition 13** For any pair of levels \(l_i\) and \(l_j\) of dimension \(d\), such that \(l_i \leq l_j\) and \(l_j \neq l_i\), the linguistic extended kinship relation \(\tilde{\eta}_{ij} : l_i \times l_j \rightarrow \widetilde{[0,1]}\) is given by
\[
\tilde{\eta}_{ij}(a, b) =
\begin{cases}
\tilde{\mu}_{ij}(a, b) & \text{if } l_j \in H_i \\
A^{OM}_\beta\bigl(\{P_{l_k} \mid l_k \in H_i\}\bigr) & \text{otherwise}
\end{cases}
\]
where \(P_{l_k} = A^{OM}_\beta(\{\delta_c \mid c \in l_k\})\) and \(\delta_c = A^{OM}_\beta\bigl(\tilde{\mu}_{ik}(a, c), \tilde{\eta}_{kj}(c, b)\bigr)\).

### 2.3 Operations

Once we have the structure of the multidimensional model, we need operations to analyze the data in the DataCube. Over this structure we have defined the usual operations of the multidimensional model:

- **Roll-up:** go up in the hierarchies to reduce the detail level.
- **Drill-down:** go down in the hierarchies to increase the detail level.
- **Dice:** project over the DataCube using a condition.
- **Slice:** reduce the dimensionality of the DataCube.
- **Pivot:** change the order of the dimensions.

For a detailed definition of the operations see [7, 8]. In these operations the hierarchical relations are very important. For example, in the roll-up operation we need to know the facts related to each value at the desired detail level. This set is defined as follows.

**Definition 14** For each value \(c_{ij}\) belonging to level \(l_i\) we have the set
\[
F_{c_{ij}} =
\begin{cases}
\bigcup_{l_k \in H_i} \left\{ F_{c_{kp}} \mid c_{kp} \in l_k \wedge \mu_{ik}(c_{ij}, c_{kp}) > 0 \right\} & \text{if } l_i \neq l_b \\
\{ h \mid h \in F \wedge \exists\, \bar{c} = (c_1, \ldots, c_{ij}, \ldots, c_n) : A(\bar{c}) = h \} & \text{if } l_i = l_b
\end{cases}
\]
The set \(F_{c_{ij}}\) represents all the facts that are related to the value \(c_{ij}\).

In the case of linguistic hierarchies we use the linguistic kinship relation, so the definition of \(F_{c_{ij}}\) has to be changed accordingly.

**Definition 15** For each value \(c_{ij}\) belonging to level \(l_i\) we have the fuzzy set
\[
F_{c_{ij}} =
\begin{cases}
\bigcup_{l_k \in H_i} \left\{ \tilde{\mu}_{ik}(c_{ij}, c_{kp}) / F_{c_{kp}} \mid c_{kp} \in l_k \wedge \tilde{\mu}_{ik}(c_{ij}, c_{kp}) \neq 0 \right\} & \text{if } l_i \neq l_b \\
\{ h \mid h \in F \wedge \exists\, \bar{c} = (c_1, \ldots, c_{ij}, \ldots, c_n) : A(\bar{c}) = h \} & \text{if } l_i = l_b
\end{cases}
\]
The set \(F_{c_{ij}}\) again represents all the facts that are related to the value \(c_{ij}\), now with a membership degree attached to each of them.
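To make the role of \(F_{c_{ij}}\) in roll-up concrete, here is a small Java sketch of the crisp case of Definition 14. All names are illustrative (this is not the F-CubeFactory API): a base-level fact is considered related to a value of a coarser level when the extended kinship degree between the two is strictly positive, and the collected bag can then be handed to an aggregation operator.

```java
import java.util.*;
import java.util.function.ToDoubleBiFunction;

/** Sketch of a crisp roll-up step (Definition 14); names are assumptions, not the real system. */
class RollUpSketch {

    /** A base-level fact: its measure value and its degree alpha (Definition 6). */
    record Fact(double measure, double alpha) { }

    /**
     * Collects the facts related to targetValue: a base fact qualifies when its coordinate
     * in the chosen dimension has a strictly positive extended kinship degree with targetValue.
     */
    static List<Fact> factsFor(Map<List<String>, Fact> baseFacts,
                               int dimensionIndex,
                               String targetValue,
                               ToDoubleBiFunction<String, String> eta) {
        List<Fact> related = new ArrayList<>();
        for (Map.Entry<List<String>, Fact> e : baseFacts.entrySet()) {
            String baseValue = e.getKey().get(dimensionIndex);
            if (eta.applyAsDouble(targetValue, baseValue) > 0.0) {
                related.add(e.getValue());
            }
        }
        return related;
    }

    /** Example aggregation over the collected bag: sum of measures weighted by alpha. */
    static double weightedSum(List<Fact> facts) {
        return facts.stream().mapToDouble(f -> f.alpha() * f.measure()).sum();
    }
}
```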
**2.4 User view**

We have presented a structure that manages imprecision by means of fuzzy logic. Some of the operations presented require the use of aggregation operators on fuzzy bags, and most of the methods previously documented give a fuzzy set as a result. Since such a result can be difficult to understand and to use in a decision process, we propose a two-layer model: one layer is the structure presented in the previous section; the other is defined on top of it, and its main objective is to hide the complexity of the model and provide the user with a more understandable result. To do so, we propose the use of a fuzzy summary operator that gives a more intuitive result while keeping as much information as possible. Using this type of operator, we define the **user view**.

**Definition 16** Given a summary operator \( M \), we define the **user view** of a DataCube \( C = (D, l_b, F, A, H) \) using \( M \) as the structure \( C_M = (D, l_b, F_M, A_M) \) where \( A_M(a_1, \ldots, a_n) = M(A(a_1, \ldots, a_n)) \) and \( F_M \) is the range of \( A_M \).

We can define as many user views of a DataCube as there are summary operators. Therefore, each user can work with the view of the data that is most intuitive according to their preferences. As an example of this type of operator, we can use the one proposed in [2], which returns the fuzzy number that best fits, in the sense of fuzziness, the fuzzy set or fuzzy bag. We can also use simpler operators such as the **weighted average**. As an example, we apply both operators to the fuzzy bag \(\{1/1, 1/2, 0.9/0.5, 0.8/2.3, 0.2/0.3, 0.1/2.5\}\) (a small sketch of the weighted-average computation is given at the end of this subsection):

- **Linguistic summary.** Using this operator, the result is \((1, 2, 0.5)\), whose associated linguistic expression is "more or less between 1 and 2".
- **Weighted average.** In this situation, the value shown to the user is 1.4.

In both cases the user gets more intuitive access to the results.

Giving the user an intuitive way to interpret the results is important, as stated by Codd et al. in the 11th OLAP product evaluation rule ([5]). Most of the time the user will understand a graphic better than a table of results, and present systems use charts to show results to the decision maker. In our model a graphical presentation is even more important, because interpreting fuzzy values is complicated even for experts in fuzzy logic. We propose two methods to represent fuzzy numbers graphically as a user view. Both approaches are shown in Figure 4. In Figure 4.a a color gradient is used to represent the membership grade of the values. The other approach (Figure 4.b) consists of changing the width of a bar to represent the membership. Both can be used to construct charts; an example is shown in Figure 5. This example represents fuzzy values related to crisp labels. In some situations it can also be interesting to represent fuzzy values related to fuzzy labels, and the first approach allows us to do so: we aggregate the membership values on both axes using a t-norm and use the result to build the color gradient. Figure 6 shows an example of a chart where the labels are defined using linguistic labels.
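As promised above, here is a minimal Java sketch of the weighted-average summary operator applied to a fuzzy bag written as degree/value pairs. The names are illustrative (this is not the F-CubeFactory implementation); run on the bag {1/1, 1/2, 0.9/0.5, 0.8/2.3, 0.2/0.3, 0.1/2.5} it yields 1.4, matching the example.

```java
import java.util.List;

/** Weighted-average summary operator for a fuzzy bag of degree/value pairs (illustrative sketch). */
public class WeightedAverageSummary {

    record Element(double degree, double value) { }

    /** Returns the membership-weighted average of the values in the bag. */
    static double summarize(List<Element> bag) {
        double weightedSum = 0.0;
        double totalDegree = 0.0;
        for (Element e : bag) {
            weightedSum += e.degree() * e.value();
            totalDegree += e.degree();
        }
        return totalDegree == 0.0 ? 0.0 : weightedSum / totalDegree;
    }

    public static void main(String[] args) {
        List<Element> bag = List.of(
            new Element(1.0, 1.0), new Element(1.0, 2.0), new Element(0.9, 0.5),
            new Element(0.8, 2.3), new Element(0.2, 0.3), new Element(0.1, 2.5));
        System.out.println(summarize(bag)); // ~1.4 (5.6 / 4.0, up to floating-point rounding)
    }
}
```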
**3 F-Cube Factory**

The system is completely built using the Java language, and it was designed with future extensions of the multidimensional model in mind. The software currently implements three DataCube models:

- **ROLAP model:** the system can manage DataCubes using a relational database to store the DataCube and to obtain the data needed to build new DataCubes.
- **MOLAP crisp model:** DataCubes are also stored using a purely multidimensional structure implemented in Java.
- **MOLAP fuzzy/linguistic model:** this model implements the fuzzy and linguistic multidimensional model presented above. It manages the fuzzy DataCubes in a MOLAP fashion.

We can differentiate two main parts in the system: the server, which implements the main functionality, and the clients, which are the user's interface to the server functionality and aim to give simple and intuitive access to the DataCubes. In the next sections we present some details of each part of the system.

3.1 F-CubeFactory Server

The server architecture is shown in Figure 7. The most important modules in the server are the following:

- DataCubes module: this module implements the three DataCube models previously mentioned and gives the rest of the modules homogeneous access to the multidimensional structure. One of its main functionalities is query answering. Efficiency is very important here because OLAP systems have to support ad-hoc queries in a reasonable time. In the fuzzy DataCube this is even more critical, since each query implies the aggregation of a great number of kinship relations. To improve efficiency, the system precomputes the extended kinship relations from each level to the basic level. This task is carried out when the fuzzy DataCube is built: a DataCube is built once but used for many queries, so the time spent aggregating the kinship relations is paid only at build time, when the user does not suffer the delay. This module also includes the user views for the fuzzy DataCubes. Adding a new user view to the server is very easy: you only need to extend a Java class and register it in the server configuration. The calculation of a user view is made only the first time the system needs a fact; the result is stored and reused the next time it is needed.
- Aggregation functions module: this module interacts with the previous one when we want to change the detail level, which is translated into a query. It implements the usual functions for crisp DataCubes (max, min, sum, average and count) and fuzzy ones, using an adaptation of Rundensteiner and Bic's operators (a sketch of such a pluggable function is given at the end of this section).

**Definition 17** Let \( R \) be an operator defined by Rundensteiner and Bic ([18]), and \( \tilde{F} \) a fuzzy bag over the facts. We define the operator \( G_R \) as \( G_R(\tilde{F}) = (R(\tilde{F}'), 1) \), where \( \tilde{F}' = \{\alpha/h \text{ such that } (h, \alpha) \in \tilde{F}\} \).

Adding new aggregation functions is as easy as adding user views.

- Server API module: this module implements the API that exposes all the functionality of the server. This is the access point for the clients.

3.2 F-CubeFactory Client

The main objectives of the client are:

- The client has to be light enough to be used on a normal personal computer.
- Most importantly, it has to offer intuitive access to the server functionality.

The client is web based, so the user only needs to access a web site using a normal web browser. Figure 8 shows the user interface. The user only has to select the desired options to access a DataCube, without needing to know any DML or DDL language. The DataCubes resulting from queries are shown to the user using tables (Figure 9) and charts (Figure 5 was built using this functionality) for all types of DataCubes.
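As referenced in Section 3.1, the following Java sketch illustrates the kind of plug-in that a new aggregation function could be: a small interface plus one adapter in the spirit of Definition 17. The interface name, method signatures and registration mechanism are hypothetical; the actual F-CubeFactory classes are not documented here.

```java
import java.util.List;
import java.util.function.ToDoubleFunction;

/** Hypothetical plug-in contract for aggregation functions over fuzzy bags of (value, alpha) pairs. */
interface FuzzyAggregation {

    record WeightedFact(double value, double alpha) { }
    record Result(double value, double alpha) { }

    Result aggregate(List<WeightedFact> bag);
}

/**
 * Adapter in the spirit of Definition 17: an operator R that consumes the fuzzy bag and
 * returns a crisp value is wrapped so that its result becomes a fact with degree 1.
 */
class OperatorAdapter implements FuzzyAggregation {

    private final ToDoubleFunction<List<WeightedFact>> r;

    OperatorAdapter(ToDoubleFunction<List<WeightedFact>> r) {
        this.r = r;
    }

    @Override
    public Result aggregate(List<WeightedFact> bag) {
        return new Result(r.applyAsDouble(bag), 1.0);
    }
}
```

A plain sum, for instance, would be `new OperatorAdapter(bag -> bag.stream().mapToDouble(FuzzyAggregation.WeightedFact::value).sum())`, registered under whatever name the server configuration expects.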
4 CONCLUSIONS AND FUTURE WORK

In this paper we have presented an OLAP system prototype that implements a fuzzy multidimensional model to represent imprecision using fuzzy and linguistic hierarchies and fuzzy facts. The system has been developed with the addition of new features in mind. We are currently working on OLAP mining over the multidimensional model presented. The next step is to implement the results of this research in F-CubeFactory and to test the system on a real case.

REFERENCES
{"Source-Url": "http://www4.ujaen.es/%7Ecarlosmo/pub/2005IFSA.pdf", "len_cl100k_base": 6919, "olmocr-version": "0.1.50", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 24558, "total-output-tokens": 8657, "length": "2e12", "weborganizer": {"__label__adult": 0.0003376007080078125, "__label__art_design": 0.0006680488586425781, "__label__crime_law": 0.0007061958312988281, "__label__education_jobs": 0.003635406494140625, "__label__entertainment": 0.0001080036163330078, "__label__fashion_beauty": 0.0002092123031616211, "__label__finance_business": 0.0022106170654296875, "__label__food_dining": 0.0004832744598388672, "__label__games": 0.0007176399230957031, "__label__hardware": 0.0010471343994140625, "__label__health": 0.0009775161743164062, "__label__history": 0.0005011558532714844, "__label__home_hobbies": 0.0002073049545288086, "__label__industrial": 0.001873016357421875, "__label__literature": 0.00048065185546875, "__label__politics": 0.0004153251647949219, "__label__religion": 0.0005221366882324219, "__label__science_tech": 0.42578125, "__label__social_life": 0.0002033710479736328, "__label__software": 0.0660400390625, "__label__software_dev": 0.491943359375, "__label__sports_fitness": 0.0002162456512451172, "__label__transportation": 0.0006380081176757812, "__label__travel": 0.0002627372741699219}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28730, 0.0198]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28730, 0.73391]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28730, 0.83104]], "google_gemma-3-12b-it_contains_pii": [[0, 4819, false], [4819, 11143, null], [11143, 17094, null], [17094, 21963, null], [21963, 24804, null], [24804, 28730, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4819, true], [4819, 11143, null], [11143, 17094, null], [17094, 21963, null], [21963, 24804, null], [24804, 28730, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 28730, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28730, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28730, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28730, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28730, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28730, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28730, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28730, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28730, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28730, null]], "pdf_page_numbers": [[0, 4819, 1], [4819, 11143, 2], [11143, 17094, 3], [17094, 21963, 4], [21963, 24804, 5], [24804, 28730, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28730, 0.0]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
c29e2ae0ae6420dcf0468bf2e8e0cea0bfc30cde
ufthesis

A \LaTeX{} Class to Format Theses and Dissertations at the University of Florida (Short Documentation)

Class version v2.0c of 2002/09/20

Ron Smith
http://www.ufthesis.ece.ufl.edu
ufthesis@ufthesis.com

Documentation updated on 2002/09/20

Abstract

This class file should help users format their thesis or dissertation according to the guidelines established by the Editorial Office of the Graduate School at the University of Florida [1]. While this template, like all others, is not approved by the Graduate School, it should help most users satisfy the formatting requirements. Use of the ufthesis document class does not guarantee acceptance of your document by the Graduate School. Several people have used this document class and have successfully made it through the Editorial Office review cycle. This documentation describes the use of the ufthesis class and any custom commands provided by the document class. Should you be interested in the description of the underlying \LaTeX{} coding, please see the full version of the documentation. Please contact the author regarding any bugs or possible modifications to the class file or its documentation.

## Contents

1 Introduction
2 Verifying Page Alignment ...
  2.1 The proper way to adjust page alignment
  2.2 An easy method to adjust page alignment
3 Usage
4 The document class options
  4.1 Choosing the font size
  4.2 Heading Styles: Bold or Underlined
  4.3 Page Numbering Placement
  4.4 Draft
  4.5 Control of Justification
  4.6 Hanging Indents in Table of Contents
  4.7 Disabling Typesetting Penalties
5 Setting the title, author's name, degree type, etc.
6 The Thesis "Frontmatter"
  6.1 Initial Page Layout and Numbering
  6.2 Title Page
  6.3 Copyright Page
  6.4 Dedication
  6.5 Acknowledgments
  6.6 Table of Contents, List of Tables and Figures
  6.7 Abstract
7 The Thesis "Mainmatter"
  7.1 Changing Page Layout and Numbering
  7.2 Chapter Titles
  7.3 Sectional Headings
  7.4 Captions for Tables and Figures
8 The Thesis "Backmatter"
  8.1 Appendix or Appendices
  8.2 List of References
  8.3 List of References and the natbib package
  8.4 Biographical Sketch
  8.5 Disabling Page Numbering
  8.6 Signature Page
  8.7 General Audience Abstract
1 Introduction

The idea of this document class is to input the standard \LaTeX\ report class and make changes as necessary to meet the Graduate School requirements. The following packages, in addition to those included with the base distribution of \LaTeX\, must exist on your system; the dates in parentheses give the dates of the oldest versions this class was tested with. Since they are all standard packages, they are readily available at any CTAN (Comprehensive \TeX\ Archive Network) site. A convenient way to access the local CTAN site is by browsing http://www.cise.ufl.edu/ftp/tex/help/Catalogue/alpha.html

- \texttt{setspace} (2000/12/01) [4].
- \texttt{ulem} (2000/05/26) [5]. You can use \texttt{\uline{⟨text⟩}} to underline ⟨text⟩. The documentation of the \texttt{ulem} package provides more information.
- \texttt{sectsty} (1999/04/12) [6]. Since underlining in section headings is rather nontrivial, the \texttt{sectsty} package is used to manipulate the formatting of sectional headings.
- \texttt{ragged2e} (1999/06/08) [7]. Since the Editorial Office prefers ragged right justification (to avoid hyphenation at the end of lines), we use this package to handle justification appropriately.
- \texttt{everysel} (1999/06/08) [8]. Required by the \texttt{ragged2e} package.

2 Verifying Page Alignment ...

Before using this document class for the final printing of your thesis (the one you submit to the Graduate School), it is important to verify the proper page alignment from the DVI file to the final output. Otherwise, the margins of the printed output may not agree with those required by the Editorial Office. The following two subsections discuss two methods that can be used to ensure proper page alignment.

2.1 The proper way to adjust page alignment

The base \LaTeX\ distribution has come with (since 1994) a special file that can be used to verify/configure the \TeX\ system for proper page alignment. \LaTeX\ the file testpage.tex, using the letterpaper and single-sided options when prompted. Process testpage.dvi (using dvips) and send the output to the printer exactly as you plan to do for your final submission. The result should be a printed page that clearly defines the edges of the text area. Use a ruler to measure the left and top margins on this page. If they are not one inch, then your \TeX\ system requires some adjustment.

Tomas Rokicki's dvips program uses a configuration file called config.ps for site-wide control of the printing process. If different printers exist, then each printer may have a corresponding config.printer file, where printer is the actual printer name. To adjust the paper alignment, you must edit the appropriate configuration file. In its original configuration, the configuration file contains a line "O 0pt,0pt". The "0pt,0pt" parameters control the alignment of the left and top edges of the printed page. Just change the line that contains "O 0pt,0pt" to whatever left and top offsets you require. Note that 1 pt is defined to be 1/72nd of an inch, so you have very fine control over the paper alignment. This process may be iterative, in that you might have to modify the configuration file and then process testpage.tex again. Continue this process until you have the proper page alignment. Please note that this issue relates to properly configuring your \TeX\ system, and is not just peculiar to the ufthesis documentclass.
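For concreteness, suppose a testpage.tex printout showed the left margin about one-eighth inch too small and the top margin about one-sixteenth inch too large; the "O" line in config.ps (or in the printer-specific config.printer) might then be changed to something like the following. The offsets here are purely hypothetical, and the sign conventions are easiest to confirm empirically, so re-run testpage.tex after each change.

```
O 9pt,-4.5pt
```

Here 9pt shifts the output roughly 1/8 inch in one direction and -4.5pt roughly 1/16 inch in the other; keep adjusting until the measured margins are exactly one inch.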
2.2 An easy method to adjust page alignment

If you have a \TeX\ implementation on your own personal computer, then the method described in the previous subsection is the preferred method. However, if you are using a \TeX\ system on a network or on a computer system maintained by the University, then you may not be able to modify the appropriate configuration files. Instead of modifying configuration files, you can add two commands to your thesis file to adjust the page alignment.

Similar to the testpage.tex file, the ufthesis documentclass is distributed with a file called ufinalign.tex. \LaTeX\ the ufinalign.tex file and process it through dvips, just as you plan to process your thesis for the final submission. That is, use the same commands and the same printer that you will use for your final printout. One of the boxes on the page displays the area in which the main text of your thesis will be typeset. Therefore, the left and top margins of this box should be 1.5 inches and 1 inch, respectively.

The ufalign.tex file contains two commands, \addtolength{\hoffset}{0pt} and \addtolength{\voffset}{0pt}, that can be used to adjust the placement of the margins. Use either points (1pt = 1/72nd of an inch) or inches to adjust the margins. To increase the left margin by a tenth of an inch, change the \hoffset command to \addtolength{\hoffset}{0.1in}. To decrease the left margin by one-eighth of an inch, change the \hoffset command to \addtolength{\hoffset}{-0.125in}. Modify the \voffset command to adjust the top margin of the page. Modify these parameters and re-process ufalign.tex until the page margins are 1.5 inches and 1 inch as required. Once you have determined the final values of these parameters for proper page alignment, copy the \addtolength{... commands into your thesis file. See the example thesis file ufsample.tex for an idea of where these commands should be pasted.

3 Usage

An example of how to use this document class is provided in ufsample.tex. Details of how to compile the example are provided in a later section. Use this class file the same way as the report class, by putting \documentclass{ufthesis} at the beginning of the \LaTeX\ file. There are only a few options available with this document class, which are described in the following sections. Due to the requirements of the Graduate School, this document class does not support two-sided printing or a two-column page layout.

4 The document class options

4.1 Font Size Options: 10pt, 11pt or 12pt

These are the font sizes that are available in the standard \LaTeX\ report document class. Most likely, the 12pt option should be used for the final typeset version of the document. Obviously, only use one of these options in the \documentclass command. The 12pt option is selected by default if none of these options are specified in the \documentclass command.

4.2 Bold

By default, all headings in the document are underlined using the \texttt{ulem} and \texttt{sectsty} packages. The Graduate School requirements have recently been modified such that a bold typeface may be used instead. By using this option, all of the headings (below the chapter level) are typeset using a bold typeface.

4.3 Page Numbering Option: \texttt{CPage}

By default, on the main text pages, the page number is displayed right-justified at the top of the page. When the \texttt{CPage} option is used, the page number is displayed centered at the top of the page.
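To make the option descriptions concrete, a preamble might select options like the following; the particular combination is only an example, and any of the options discussed above can be mixed as needed.

```latex
% Example only: 12pt type, bold headings instead of underlining,
% and centered page numbers on the main text pages.
\documentclass[12pt,Bold,CPage]{ufthesis}
```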
4.4 Draft

This option has been defined to facilitate switching the page numbering and style between the front matter and the main text as appropriate. Using the \texttt{Draft} option, the page numbering starts at 1 and increments throughout the document, and page numbers are displayed using Arabic numerals. However, the page numbers are not displayed in either the abstract or the biographical sketch. This option is useful for general purpose work, and for generating the extra copies of the abstract and biographical sketch that are required by the Graduate School at final submission.

4.5 Justification Option: \texttt{Justify}

By default, the thesis is typeset using ragged right justification. Ragged right justification helps avoid excessive word hyphenation at the end of each line. When this option is used, ragged right justification is disabled, and the thesis is typeset using full justification. According to feedback from the Editorial Office, ragged right is the preferred justification method. The \texttt{ragged2e} and \texttt{everysel} packages are used to enable the ragged right justification.

4.6 Hanging Indents: \texttt{NoTocHang}

According to the Graduate School requirements, the second line of a heading in the table of contents or list of figures/tables must be indented. This is the default behaviour of the document class, as a hanging indent of width \texttt{\RS@TOChdent} (relative to the first line of text) will be present on the second line of any entry in these lists. By using the \texttt{NoTocHang} option, for any headings that are numbered, the second line will be indented such that it aligns with the text of the first line.

4.7 Typesetting Penalties: nopenalties

By default, several \TeX\ parameters are set such that widows and orphans (single lines starting/ending a paragraph at the top or bottom of the page) do not occur. By using this option, one can easily view the effects of using the default \TeX\ values for these parameters. Then, if a widow or orphan occurs, a manual page break can be forced at the appropriate point in the source code to remove the widowed or orphaned line.

5 Setting the title, author's name, degree type, etc.

The document preamble is the part that occurs before the \begin{document} statement. Throughout the document, information about the author, the title and such is required. The following commands are used in the document preamble to define all of the required text strings that are used to personalize the document (a short example preamble follows the list).

- \SetTitle{text} This is the title as it appears on both the title page and in the abstract. The title must be entered using all upper-case letters. Line breaks can be entered in the title by using the \\ command.
- \SetFullName{text} The author's name, using capital letters where appropriate.
- \SetThesisType{text} This should be the word "Thesis" or "Dissertation".
- \SetDegreeType{text} Something like "Master of Science" or "Doctor of Philosophy".
- \SetGradMonth{text} The month in which the degree is conferred. "May" or "December" seem to be popular choices.
- \SetGradYear{text} The year in which the degree is conferred.
- \SetDepartment{text} The author's department name, which appears in the abstract.
- \SetChair{text} The chairperson's name.
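The following fragment shows how these commands might look in a preamble; all of the values (name, title, department, and so on) are placeholders chosen for illustration, not defaults supplied by the class.

```latex
\documentclass[12pt]{ufthesis}

% All values below are placeholders, not class defaults.
\SetTitle{A STUDY OF SOMETHING IMPORTANT}
\SetFullName{Jane Q. Student}
\SetThesisType{Dissertation}
\SetDegreeType{Doctor of Philosophy}
\SetGradMonth{May}
\SetGradYear{2003}
\SetDepartment{Electrical and Computer Engineering}
\SetChair{John A. Professor}

\begin{document}
% ... frontmatter, chapters, and backmatter as described in the following sections ...
\end{document}
```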
Usage of a command, for example \SetTitle{ABSTRACT GEOMETRY}, defines a command without the Set part (in this example \Title, printing "ABSTRACT GEOMETRY") that is used internally but can also be used throughout the text.

6 The Thesis "Frontmatter"

In this section, we present the commands used to add all pages prior to the first chapter of the thesis. Some of these commands are required while some of them are optional.

6.1 Initial Page Layout and Numbering

\frontmatter This command is required, and should be used immediately after the \begin{document} command, to initialize several page numbering and layout parameters.

6.2 Title Page

\maketitle The command \maketitle formats the title page.

6.3 Copyright Page

\makecopyright The optional copyright page can be inserted by using the command \makecopyright.

6.4 Dedication

\dedication The optional dedication can be inserted by using the command \dedication{text}, where text is the contents of the dedication.

6.5 Acknowledgments

\acknowledge In order to format the optional acknowledgments, use the command \acknowledge{text}. The title "ACKNOWLEDGMENTS" is automatically added to the table of contents. This title may be modified by renewing \acknowledgname.

6.6 Table of Contents, List of Tables and Figures

\tableofcontents, \listoftables, \listoffigures These lists are generated with the commands \tableofcontents, \listoftables and \listoffigures. The titles of these lists may be changed by renewing the \contentsname, \listtablename, and \listfigurename macros, respectively.

6.7 Abstract

\abstract This command, which is actually an environment, sets up the required text for the abstract. The proper use is: \begin{abstract} The actual abstract text \end{abstract}.

7 The Thesis "Mainmatter"

In this section, we describe the commands used to format the main chapters of the thesis. Please refer to the sample thesis file ufsample.tex for examples of how these commands are used.

7.1 Changing Page Layout and Numbering

Before the main body of the document, use the \mainmatter command to control the page numbering. Most likely, this command should be used immediately after the \abstract environment.

7.2 Chapter Titles

Use the \chapter command to start a new chapter in the document. If the boolean SetDSpace is true, then the chapter will be typeset using double-spacing. In general, the class file sets the SetDSpace boolean appropriately so that the selection of single-spacing or double-spacing occurs transparently to the user. Note that the chapter title may extend over more than one line by using the following form: \chapter[Title Line 1 \protect\newline Title Line 2]{Title Line 1 \\ Title Line 2}

7.3 Sectional Headings

Use the \section command for the first-level subheadings. Use the \subsection command for the second-level subheadings. Since it may be difficult to distinguish third-level subheadings from second-level subheadings, it is suggested that the \paragraph command be used for all third-level subheadings. Note that the text of the heading may extend over more than one line. In this case, use the command in the same form as what is shown above for the \chapter command.

7.4 Captions for Tables and Figures

According to the Graduate School, captions must be placed above tables and below figures. Normally \LaTeX\ uses the lengths \abovecaptionskip and \belowcaptionskip to determine the amount of white-space above or below a caption.
The figure and table environments have been slightly modified such that different spacing is defined around the \caption command. For figures, the lengths \abovefigcaptskip and \belowfigcaptskip determine the amount of white-space around the figure caption. Likewise, the lengths \abovetabcaptskip and \belowtabcaptskip control the spacing around table captions.

8 The Thesis "Backmatter"

The following commands are used to control the appearance and content of the end of the thesis (appendices, signature pages and such). You might have to tailor some of these commands to suit your college requirements.

8.1 Appendix or Appendices

If a single appendix is to be included, the following sequence of commands should be used:

\appendix
\clearpage
\addcontentsline{toc}{extrachapter}{APPENDIX\protect\hspace{2.0em}TITLE}
\chapter*{APPENDIX \\ TITLE}

If, however, multiple appendices are to be included, the following sequence of commands should be used:

\appendix
\clearpage
\addcontentsline{toc}{extraentry}{APPENDICES}
\chapter{TITLE OF APPENDIX A}
Blah Blah Blah ......
\chapter{TITLE OF APPENDIX B}

8.2 List of References

The \bibliography environment has been modified to start a new chapter with a title that defaults to REFERENCES (see \bibname). Entries in the bibliography are typeset single-spaced with a double-space between individual entries. While items may be manually entered into this environment, it is strongly suggested that the BibTeX program [10] be used to maintain the list of references. If the BibTeX program is used, then a style file is needed in order to format the entries of the bibliography environment. This document class comes with two bibliography style files, \texttt{uffull.bst} and \texttt{ufinit.bst}, that may be useful. They are just slightly modified versions of the style file used by the IEEE\footnote{Institute of Electrical and Electronics Engineers, Inc.} for its transaction papers. The references are numbered and listed in order of citation. Using the uffull.bst style, the authors' names will be listed as given in the database (.bib) file. Using the ufinit.bst style, the authors' names will be listed using initials (for first and middle names) followed by the surname. These files have been provided as a convenience to users (but have not been approved by the Editorial Office). However, other style files are available from CTAN, or custom files may be created by using the custom-bib package [15].

8.3 List of References and the natbib package

The natbib package [16] can be used with the ufthesis documentclass. The natbib package unfortunately modifies some of the commands defined by the ufthesis documentclass. Therefore, an extra step must be included to correct these modifications. A file named ufnatbib.cfg is distributed with the ufthesis documentclass for this purpose. The contents of the ufnatbib.cfg file are just those commands that must be redefined after natbib is loaded. Just rename the ufnatbib.cfg file to natbib.cfg, and place it in the directory with your main thesis file. When the natbib package is loaded, it will automatically read the natbib.cfg file to make the necessary corrections.

8.4 Biographical Sketch

\biography{text} Use the \biography{text} command to typeset the text of the required biographical sketch. The title "BIOGRAPHICAL SKETCH" is automatically added to the table of contents, but this may be changed by renewing the \biographyname macro.

8.5 Disabling Page Numbering

\backmatter Use the \backmatter command to turn off the display of page numbers on the remaining pages in the document.
This command should be used before the typesetting of the signature page.

8.6 Signature Page

\CertPar \SubmitPar These commands are used to help simplify the creation of the signature page. Since the number of committee members is different for a thesis and a dissertation, a custom signature page must be created. These commands, as given, are set up for a signature page according to the College of Engineering guidelines [1]. You most likely will have to modify the definitions of these commands for your individual requirements. Both commands have two arguments, with the first one being optional. A typical use of these commands is given below.

```latex
{\setlength{\parskip}{0.15in}
\CertPar{\Chair, Chair \newline Associate Professor of Electrical and \newline Computer Engineering}
\CertPar{Another Name \newline Professor of Electrical and Computer \newline Engineering}
\newpage
\SubmitPar{Pramod P. Khargonekar \newline Dean, College of Engineering}
{Winfred M. Phillips \newline Dean, Graduate School}}
```

Note that the \parskip above the first \CertPar is used to set the spacing between the individual certification blocks. The spacing between the end of the "I certify that I have read" statement and the actual signature line is controlled by the optional argument of the \CertPar command (likewise for the \SubmitPar command). Change this spacing as desired by using the following version of the command:

```latex
\CertPar[0.5in]{Name \newline Department \newline Second Line of Department}
```

### 8.7 General Audience Abstract

The general audience abstract should be no more than 150 words and should be written to communicate in clear and effective, nonspecialized language the contributions of the work to the state of Florida, the nation, society in general, and/or the discipline. The following structure can be used to typeset the general audience abstract.

```latex
\begin{simpleenv}{}{}{}{}
\pagestyle{empty}
\begin{flushleft}
TITLE OF THE DISSERTATION\\*[\BaseDiff\baselineskip]
\FullName\\
(352) xxx--xxxx\\
Department of \Chair\\
Degree: \DegreeType\\
Graduation Date: \GradMonth\ \GradYear
\end{flushleft}
\GoDouble
% Enter the text of the general audience abstract here.
\end{simpleenv}
```

9 Special Commands

The commands described in this section are used to aid in the commenting of the source file, and also to do some special formatting. These commands are optional, in that they are not required in order to format the document.

9.1 Adding Margin Notes

Use the command \NOTE{text} to display the argument text in a special box with the word NOTE displayed in the page margin. If the boolean \ShowNotes is true, then the note will be displayed on the typeset page. Otherwise, the note will be ignored. The idea is to use the \NOTE command to write personal notes while working on the draft, which can easily be removed for the final typeset copy.

9.2 Automatically Generated Indices

Several commands have been included in this document class file to facilitate the creation of automatically generated lists with the MakeIndex program [11, 12]. These commands are based on the multind.sty package by F. W. Long [3, chapter 12], which allows the creation of multiple indices in the document. For example, two indices may be used to generate a list of abbreviations and a list of mathematical symbols. Please note that \TeX\ has a limited number of files that can be written to at one time, so an unlimited number of indices cannot be used.
These commands are meant as an alternative to other index packages (like makeidx.sty or showidx.sty) which only support one index file per document.

Use the \makeindex command in the document preamble to enable the creation of an index file. If only one index is to be created, then just use the \makeindex command. However, if there is to be more than one index, or it is desirable to name the index file something besides the default, use the following command:

\makeindex[filename]

A unique filename must be assigned to each index, so a \makeindex[] command should exist for each index in the document preamble.

Issue the \printindex command at the point in the document where the index is to be typeset. Typically, this will be after the lists of figures and tables (if included). The command creates a new chapter and sets up the initial page formatting for the list to be generated. The form of the \printindex command directly corresponds to the form of the \makeindex command used in the document. The \printindex command has four arguments, with one being optional. The two ways to use this command are:

\printindex{INDEX TITLE}{INDEX TITLE}{text}
\printindex[filename]{INDEX TITLE}{INDEX TITLE}{text}

The second and third arguments of the \printindex command correspond to what is eventually placed into a chapter command, \chapter{2nd arg}{3rd arg}. The fourth argument is used if some additional text is to be included between the title of the index and the actual listing of the index. This form of the command may be useful to provide a brief description of the contents/purpose of the index.

The \index command is used to actually define what is going to be added to the index list. As with the other indexing commands, there are two forms that can be used, depending on how the \makeindex command was issued. For generating only one index file, use the command shown below:

\index{text to add to index}

If there is more than one index file, or if a custom filename was used for the index, then the following form of the command must be used:

\index[filename]{text to add to index}

Of course, for each index file that is created, a run of the MakeIndex program will be required. How to actually format the index using the MakeIndex program, and how to customize MakeIndex style files, is discussed in [3], and is therefore considered to be beyond the scope and intent of this document. However, along with the ufthesis document class there are two MakeIndex style files (ufpage.ist and ufnpage.ist) that allow one to typeset an index with or without page numbers in a format that is very similar to what is typeset in the table of contents. Assuming that one has issued the command \makeindex[filename] in the document preamble, then after running the document through \LaTeX, one can format the index file by issuing one of the following commands:

makeindex -s ufpage.ist filename
makeindex -s ufnpage.ist filename

Of course, an extra run of the document through \LaTeX\ will be required to actually typeset the formatted index. There is one important point to remember: just as generating the cross-references in a \LaTeX\ document may require \LaTeX{}ing the document several times, the same might be true for generating the index files. If the table of contents file changes between two runs of \LaTeX\ on the document, another run of the MakeIndex program may be required (followed by another run of \LaTeX) in order to get the page numbers correct in the typeset index file.

10 Changing the Code ...
It might be necessary to modify some of the commands defined in the \texttt{ufthesis} documentclass. One possible reason for modification would be changing the signature page commands to your specific requirements. However, you do NOT have to (and SHOULD NOT!) edit the \texttt{ufthesis.cls} file directly. Create a file called \texttt{ufthesis.cfg} which contains any commands that you want to add to or modify in the \texttt{ufthesis} documentclass. Place this file in the same directory as your main thesis file. The \texttt{ufthesis} documentclass will automatically load this configuration file. See the sample thesis file, \texttt{ufsample.tex}, and the sample configuration file, \texttt{ufmod.cfg}, for more details on this subject.

11 An Example Thesis File

In order to typeset the example file, the following files should exist: \texttt{ufthesis.cls}, \texttt{ufsample.tex}, \texttt{ufpage.ist} and \texttt{ufnpage.ist}. All of these files are generated from \texttt{ufthesis.dtx}, so if a file is missing, complain to someone who knows where/how to get/process the \texttt{ufthesis.dtx} file. In addition, depending on the \LaTeX{} system in use, the files \texttt{setspace.sty}, \texttt{ulem.sty}, \texttt{ragged2e.sty}, \texttt{everysel.sty} and \texttt{sectsty.sty} may also be required. These files (and those generated by \texttt{ufthesis.dtx}) may have to be placed in certain directories, or certain environment variables defined, such that they are seen by \LaTeX{} and the MakeIndex programs. Assuming that all of the files listed above exist, the following commands can be used to typeset the example thesis.

\begin{verbatim}
latex ufsample
makeindex -s ufnpage.ist keylist
makeindex -s ufpage.ist mathlist
latex ufsample
latex ufsample
makeindex -s ufnpage.ist keylist
makeindex -s ufpage.ist mathlist
latex ufsample
latex ufsample
\end{verbatim}

12 Acknowledgments

A special thanks to the authors of the \texttt{setspace}, \texttt{ulem}, \texttt{sectsty}, and \texttt{ragged2e} packages, for without these packages generating this class file would have been much more difficult. Former students Ali Almutairi, Brad Rainbolt, Shannon Fields and Philip McGoff deserve mentioning, as they volunteered to try out this document class while writing their dissertations/proposals. Bernd Schandl (Clemson University) was kind enough to share his work on his thesis package, and also provided some elegant solutions that were used in this document class. Dr. Brett Presnell, of the Department of Statistics, provided some useful suggestions for improving the quality of the document class. Walda Metcalf and Rhonda Riley, both of the Editorial Office, have been very helpful in providing much needed feedback. Without the assistance of Dave Blackman (of the Electrical and Computer Engineering Department), the distribution and maintenance of the \texttt{ufthesis} documentclass would be, at best, extremely difficult.

\section*{References}

[2] Leslie Lamport, Frank Mittelbach and Johannes Braams, \texttt{classes.dtx}. This file is part of the base \LaTeX{} distribution.

[6] Rowland McDonnell, \texttt{sectsty.sty}, version v2.0.1, 1999/04/12. A \LaTeX{} package used to manipulate the formatting of sectional headings.

[7] Martin Schröder, \texttt{ragged2e.sty}, version v1.02, 1999/06/08. A \LaTeX{} package that is used for ragged right justification with control over hyphenation.

[8] Martin Schröder, \texttt{everysel.sty}, version 1.03, 1999/06/08.
Required by \texttt{ragged2e.sty}; controls interword spacing along with font changes.

[10] Oren Patashnik, \textit{BibTeXing}. Documentation for general BibTeX users, February 1988. The \LaTeX{} text of this document is included with the BibTeX distribution.

[12] Leslie Lamport, \textit{MakeIndex: An Index Processor for \LaTeX{}}, 1987. The \LaTeX{} text of this document is included in the \texttt{makeindex} software distribution.

[13] Johannes Braams, David Carlisle, Alan Jeffrey, Leslie Lamport, Frank Mittelbach, Chris Rowley, and Rainer Schöpf, \texttt{ltfloat.dtx}. This file is part of the base \LaTeXe{} distribution.

[14] Johannes Braams, David Carlisle, Alan Jeffrey, Leslie Lamport, Frank Mittelbach, Chris Rowley, Tobias Oetiker, and Rainer Schöpf, \texttt{ltsect.dtx}. This file is part of the base \LaTeXe{} distribution.

[15] Patrick W. Daly, \texttt{custom-bib}. A \LaTeXe{} package that can be used to create custom "bst" (\BibTeX{} formatting) files. The "bst" file controls the formatting of the references in the bibliography list.

[16] Patrick W. Daly, \texttt{natbib.sty}, version 7.0a, 2000/07/24. A \LaTeXe{} package that acts as a general, all-purpose citation-style interface. This can be used to change how references are displayed in the main matter of the thesis.
{"Source-Url": "http://www.ufthesis.ece.ufl.edu/Files/shortdoc.pdf", "len_cl100k_base": 7578, "olmocr-version": "0.1.53", "pdf-total-pages": 17, "total-fallback-pages": 0, "total-input-tokens": 36401, "total-output-tokens": 8740, "length": "2e12", "weborganizer": {"__label__adult": 0.0005421638488769531, "__label__art_design": 0.003612518310546875, "__label__crime_law": 0.0006933212280273438, "__label__education_jobs": 0.35400390625, "__label__entertainment": 0.0003437995910644531, "__label__fashion_beauty": 0.00047707557678222656, "__label__finance_business": 0.00146484375, "__label__food_dining": 0.00047707557678222656, "__label__games": 0.0013608932495117188, "__label__hardware": 0.0012063980102539062, "__label__health": 0.0006165504455566406, "__label__history": 0.00153350830078125, "__label__home_hobbies": 0.0007271766662597656, "__label__industrial": 0.0006241798400878906, "__label__literature": 0.0026073455810546875, "__label__politics": 0.0005335807800292969, "__label__religion": 0.0010318756103515625, "__label__science_tech": 0.025543212890625, "__label__social_life": 0.0008983612060546875, "__label__software": 0.2249755859375, "__label__software_dev": 0.375244140625, "__label__sports_fitness": 0.0004258155822753906, "__label__transportation": 0.0004322528839111328, "__label__travel": 0.0006351470947265625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 33610, 0.03749]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 33610, 0.47824]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 33610, 0.83346]], "google_gemma-3-12b-it_contains_pii": [[0, 1177, false], [1177, 3512, null], [3512, 5125, null], [5125, 7747, null], [7747, 9798, null], [9798, 12090, null], [12090, 13990, null], [13990, 15529, null], [15529, 17596, null], [17596, 19171, null], [19171, 21379, null], [21379, 22995, null], [22995, 25266, null], [25266, 27752, null], [27752, 29873, null], [29873, 32339, null], [32339, 33610, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1177, true], [1177, 3512, null], [3512, 5125, null], [5125, 7747, null], [7747, 9798, null], [9798, 12090, null], [12090, 13990, null], [13990, 15529, null], [15529, 17596, null], [17596, 19171, null], [19171, 21379, null], [21379, 22995, null], [22995, 25266, null], [25266, 27752, null], [27752, 29873, null], [29873, 32339, null], [32339, 33610, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 33610, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 33610, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 33610, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 33610, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 33610, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 33610, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 33610, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 33610, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 33610, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 33610, null]], "pdf_page_numbers": [[0, 1177, 1], [1177, 3512, 2], [3512, 5125, 3], [5125, 7747, 4], [7747, 9798, 5], [9798, 12090, 6], [12090, 13990, 7], [13990, 15529, 8], [15529, 17596, 9], [17596, 
19171, 10], [19171, 21379, 11], [21379, 22995, 12], [22995, 25266, 13], [25266, 27752, 14], [27752, 29873, 15], [29873, 32339, 16], [32339, 33610, 17]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 33610, 0.0]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
1af809a862a6d6fa4d5744a62301b0cad6715a1a
Security Architecture in \textit{Gaia} \footnote{This research is supported by the National Science Foundation grant NSF 98-70736}

Prashant Viswanathan, Binny Gill and Roy Campbell
University of Illinois, Urbana-Champaign IL, USA

\textbf{Abstract.} Ubiquitous computing promotes physical spaces with hundreds of computational devices, extending the user's view of the computational environment beyond the physical limitations of a traditional distributed system. Gaia OS is a middleware operating system that makes the realization of such spaces a possibility. In any such environment Security and Privacy are primary concerns, and mechanisms to address them have to be built into the very core of the system so that their deployment is acceptable and feasible. In this paper we describe a security architecture for Gaia OS and Active Spaces. Our architecture provides the dynamism and flexibility needed to manage the security concerns in such computing systems.

1 Introduction

Security has always been a primary concern in any computing model. This is particularly true of ubiquitous computing environments, where resources and information are shared on a very large scale. Sharing of resources by users in a distributed system makes sense only when it is done in a controlled manner. Security issues are even more important in a ubiquitous computing environment, as contextual information such as a user's location constitutes an integral part of the computing model.

The proliferation of ubiquitous computing makes the construction of Active Spaces a reality. Active Spaces will enable users to leverage the computing power of a large number of computing devices to enhance user productivity. An Active Space is a distributed system in which a user exhibits "mobility". With the number of Ubiquitous Computing applications steadily increasing, we need an application framework to facilitate their construction and deployment. For any successful deployment of an Active Space in the real world, we believe that the underlying framework should have the security mechanisms built into its core. In Gaia OS we have built a set of powerful security mechanisms into the core of the system.

A computing environment for an Active Space needs several security measures to ensure the privacy of users and to protect users and resources from the malicious intent of other users and resources. The security challenges in such a system include Authentication, Access Control, Privacy of Location and Key Distribution. In an Active Space users can authenticate themselves in multiple ways. Access Control is required for files and services, and the system has to provide mechanisms by which users can secure their privacy. Scalability is another major issue in such a system. Users and programs also need mechanisms for controlled amplification of rights and delegation of authority. In Gaia OS we use Credentials as a basis for Authentication and Access Control, to support scalability and multiple administrative domains. We use authentication mechanisms involving public/private keys and symmetric keys. Roles and Attributes, in combination with a rich set of Policies, allow dynamic and flexible Access Control. Our Access Control mechanisms involve the use of both Credentials (similar to Capabilities) and Policies (an extension of Access Control Lists). A "Set-User Credential" service allows amplification of rights, and Credentials allow users to delegate authority to other programs in a controlled manner.
The next section describes Active Spaces and Gaia, a framework that enables such an Active Space. Section 3 discusses the problems in greater detail. Section 4 discusses Roles, Policies and Credentials, around which the Gaia Security Architecture is built. In Sections 5 and 6 we discuss Authentication and Access Control. We then discuss Secure Loading of Components, Bootstrapping and the "Set-User Credential" service in Sections 7, 8 and 9. Ongoing work and future research is discussed in Section 10, and we conclude in Section 11.

2 Gaia and Active Spaces

Gaia supports an Active Space by facilitating the interaction with physical space. It does this by making physical spaces programmable. An Active Space is a generic computing system and, like any other, consists of hardware, software, a middleware operating system, application programs and users. The hardware consists of all computing devices in an active space. The operating system in an Active Space provides a uniform view of both logical and physical devices. All the entities in an active space, which comprise the set of users and the application programs, can use the power of this generic computing model. Active Spaces and Gaia are discussed in much greater detail in [2, 3].

Gaia OS is a system software infrastructure that enables Active Spaces. It is a component-based distributed meta-operating system that runs on existing operating systems and is implemented on top of existing middleware platforms. Fig 1 shows the architecture of Gaia. The Unified Object Bus defines the distributed object model for the OS and allows dynamic creation and reconfiguration of components. Gaia Services are implemented as CORBA objects [9], run in the context of an active space, and can be accessed by any entity with the functionality to interact with the unified object bus. These services include Security, QoS, Resource Manager, Environment Service and a Data Object Service. The Gaia Application model is based on the Model-View-Controller pattern and provides facilities to create and register the different components of the application (model, view and controller) and to manipulate such components.

3 Security Problems and Issues

In this section we discuss the security issues which manifest themselves in an Active Space and, consequently, in any framework which is a building block for such Active Spaces. Security concerns in Ubiquitous Computing can be classified into the following categories, which are described in greater detail in the later sections.

- Authentication.
- Access Control.
- Secure Boot-Strapping.
- Privacy.
- Confidentiality.

Apart from Authentication and Access Control, Ubiquitous Computing environments have to deal with the confidentiality of information and the privacy of users. Privacy concerns are a direct consequence of the system being location and context aware. As these computing environments are built on a huge scale, Key Distribution and Scalability are major concerns. The scalability problem also introduces new challenges in the administration of such systems. Thus any framework which supports Ubiquitous Computing has to solve the above problems. As Gaia is a component-based framework, we also have to provide mechanisms for securely loading such components. The system also has to be bootstrapped in a manner such that the integrity of the system is not compromised. Location information has to be protected and distributed in a manner such that a user's privacy is not compromised.
Since Gaia is a distributed system, the security mechanisms should also be able to solve the following problems.

- **Delegation of authority to a trusted program**: A user has to provide a program/service a mechanism by which the program/service can execute on the user's behalf.
- **Delegation of authority to an untrusted program**: If the user does not trust a program he might not want to give it complete authority to execute on his behalf. For example, Service S might not be trusted, and instead of giving it complete authority a user might want to give it only the authority to use resources in `/Active_Space/Room_3234` on his behalf. The Confused Deputy problem also manifests itself. This problem arises when a program/service runs with authority stemming from two sources. It is sometimes necessary in such cases to limit authority stemming from a particular source to only some resources.
- **Simple authentication**: In some cases a user might just want to authenticate himself to a service. The service doesn't need to execute using the user's authority. In such cases the user should be able to issue a Credential that the service cannot misuse to execute on the user's behalf.

4 Credentials, Policies and Roles

To solve the various security problems which manifest themselves in such a system, we use **Roles**, **Policies** and **Credentials**. **Policies** are an extension of **Access Control Lists** that allows us to specify a richer and more powerful set of constraints and conditions. Jini uses a similar mechanism with the help of Access Control Lists[11], but it is not as powerful as what we provide in Gaia. **Credentials** are similar to **Capabilities**, and these are used in combination with Policies to do Access Control in Gaia OS [7]. We provide both Role-Based and Discretionary Access Control in Gaia OS. The Authentication Service in Gaia OS provides different kinds of Credentials to achieve the various objectives.

4.1 Credentials

A Credential in Gaia is a certificate which is issued to users/programs. Credentials are similar to capabilities as they control the degree to which authority can be delegated, and in Gaia we have three different kinds of Credentials:

- **Generic Credential**: These Credentials solve the problem of delegating authority to a trusted program mentioned in Sec 3. A typical Credential looks as shown in Fig 2. These Credentials give the holder of the Credential all privileges associated with the Credential's owner. As a user might be a part of many roles and have many attributes, a list of roles and attributes for which the Credential is valid is also sent. A key that is shared by the Access Control Service and the Authentication Service signs the Credential. These services are described in Sec 5 and Sec 6. The Credential also has a time field that stores the time when the Credential was issued. This field is used to decide upon the expiration of the Credential.

Fig. 2. A Credential

- **Restricted Credential**: When the **DELEGATION RESTRICTED TO** field is present, the Credential is termed a restricted Credential. The restricted Credential solves the problem of delegating authority to an untrusted program mentioned in Sec 3. In this case the holder of the Credential only has privileges to access resources enumerated in the **DELEGATION RESTRICTED TO** field.
- **Non-Delegatable Credential**: These are Credentials that can be used by the client to prove its identity to the service without giving any of its own rights to the service.
This is achieved by a Non-Delegatable Credential issued by the **Authentication Service**. This Credential has the ID of the **TARGET SERVICE** that the client wants to authenticate to. Thus, the service cannot use this Credential to contact other services pretending to be the client.

4.2 Policies

Policies are associated with all resources in the system. Policies in Gaia are expressed as boolean expressions in disjunctive normal form. These policies are extremely powerful, as they can be expressed not only in terms of roles but also in terms of attributes. A sample policy for a Directory Object is shown in Fig 3.

```
enter   student ;
create  admin & args[1] = "mail.conf" ;
delete  admin ;
default student & time > 8:00 & time < 17:00
        bogill & date < 01/06/2001
        admin ;
```

Fig. 3. A policy file for a Directory

It is also known that **admin** is higher in the role hierarchy than **student** and that **bogill** is a **student**. Thus, any user with the **student** role or higher can enter the directory. Only the admin can create a file in the directory, and further that file should be named "mail.conf". The **args** array contains the arguments to the method in question (in this case, **create**). Only the admin can delete files from the directory. Access to any other operation is controlled according to the **default** method policy. One such operation is **list**. Thus, all students can list the directory during office hours, **bogill** can list files in the directory anytime until the sixth of January 2001, and of course the admin can list the directory all the time.

Policies are also associated with other resources. For example, consider a service which allows users to display video on a High Definition TV. Then the CORBA object implementing this could be associated with a policy as shown in Fig 4. This policy says that the **showVideoOnHDTV** method can only be accessed by **Junior_Admin** roles whose age is greater than 50.

```
showVideoOnHDTV Junior_Admin & age > 50 ;
```

Fig. 4. Example of a simple policy file for an object

Gaia OS allows composition of Policies. We associate a default system policy with every type of component. Users can override this with their own default policy. Further, each running instance of a component can be given a distinct policy, and this can be changed dynamically at run-time. Gaia OS checks if the per-instance policies are consistent with the user-level or system-level policies. In case of inconsistency, the per-instance policy prevails. A Browser allows the user to look at the various instances of objects and components that have been created and allows the policies to be modified with the help of a Policy Editor. The representation of these policies is shown in Fig 5. When a component/object of type HDTV controller is started, a default policy is applied unless the user starting the component has specified his own default policy (as is the case here). Further, he can specify a different policy for each instance of the component he starts, as shown in the figure.

Fig. 5. Representation of Policies

4.3 Roles

Roles are represented in the Gaia File System. The hierarchy of roles is represented as a directory hierarchy in the file system. A typical hierarchy is shown in Fig 6.

Fig. 6. Hierarchy in the file system

Roles are defined by creating appropriate directories in the resource hierarchy. Roles are administered by enforcing access policies in the role hierarchy representation in the file system. Thus, if an administrator wants to add a role or a user to a role, he should have the permissions to create the corresponding file or
directory in the role hierarchy. The access policies associated with the files and directories in the role hierarchy determine the policies for role administration in our access control model. The Access Control service examines the policies associated with the files and directories and makes decisions which determine whether roles/users can be added or deleted. The representation of roles in such a manner greatly simplifies the administration of the system. Similarly, access control is provided for services in Gaia.

4.4 Credentials and Roles

Credentials and Roles are closely tied together. If a user authenticates himself and chooses not to activate any role, he gets only the Credential as a user, {user}_signedAS. However, if he authenticates himself and decides to activate Roles A and B, then his Credential also indicates that he has privileges belonging to Roles A and B, {user, Role A, Role B}_signedAS. When a program gets a Credential it executes with the full rights of the user and roles that the Credential identifies, unless it is a Restricted Credential. Also, at any time it should be possible for a user to change his Credential so that he can deactivate existing roles or activate additional roles. When a user starts a program and wants to give the program his Credential, he should be able to specify that the Credential be given only for specific roles.

5 Authentication

Authentication consists of validating a user's claim regarding his identity. The Authentication Service (AS) provides this functionality. The authentication service provides different kinds of Credentials (see Sec 4.1) to solve the problems discussed in Sec 3. These Credentials enable the Access Control Service to provide Discretionary, Mandatory and Role-Based Access Control. The authentication service issues a Credential when a user proves his identity. A user could prove his identity in many different ways. He could use a traditional login/password mechanism. He might also use a smart card that proves his identity. Other devices such as I-buttons[16] can be used to store the private key of the user, and a challenge/response method could be used to prove his identity. The use of private and public keys ensures that the system scales as required. Further, a user can add and delete roles in his Credential using the services of the Authentication Service. He can also change the type of his Credential using the API provided by the Authentication Service.

6 Access Control

Access Control in Gaia is discussed in detail in [1]. Here we provide a very short description of the same. Gaia provides an extensive mechanism to enforce different kinds of access control. It provides:

1. Role-Based Access Control[4][5][6].
2. Discretionary Access Control.
3. Mandatory Access Control.

Role-Based Access Control is applied to all the System Files and Services. ARBAC[4] also ensures that administration of the system is easy and scalable. Discretionary Access Control is provided to users so that they can secure their private files. Mandatory Access Control provides military-grade security in the system and is essential if the system is deployed in a military environment. We need a combination of all three, as a single access control mechanism is not powerful and expressive enough to satisfy all access control requirements of such a system. The Access Control Service, on obtaining the name of the resource, can fetch the corresponding policy file and decide whether or not access is allowed.
The two methods available in the interface for the Access Control Service are canAccessGivenPolicy and canAccessGivenPath, both of which take a Credential as input. They also take Arguments as a parameter, as some policies might be specified in terms of the arguments. For example, when a user wants to access a file, he uses the File Service. The File Service contacts the Access Control Service with the path for the file and the Credentials of the user. The Access Control Service then tells the File Service whether or not the user is authorized to access the file. The Credential is generated by the Authentication Service and is encrypted. The Authentication Service and Access Control Service share the same key. The Access Control Service makes its decisions based on the policy file for the resource/object/service. These policy files can be edited by the owner of the resource to give permission to other users. There are also meta-data files which determine who has access to the policy files.

6.1 Security Interceptors

We use interceptors[8] to add additional security contexts to the requests that are propagated from the client to the server. These interceptors insert the Credential of the caller, and this mechanism is transparent to the application. Interceptors also give access to the method name and the parameters that are being passed to access the method. Fig 7 shows this mechanism in the case of a File Server. The interceptor is responsible for contacting the Access Control service. In case access is denied it raises an exception. Otherwise it lets the normal path of invocation continue as if nothing had happened. In Gaia OS we have interceptors which work with TAO[14] and ORBACUS[15]. These are request-level Portable Interceptors defined as part of the CORBA specification[10].

7 Secure Dynamic Loading of Components

As Gaia OS is a component-based framework, we have to ensure that there are appropriate mechanisms for securely loading such components. There are several aspects to this problem, which are discussed in the following subsections.

7.1 Component Repository as part of the file system

The component repository can be stored on the file system. This simplifies its security management and further ensures that the generic security mechanism for the system can be used for this too.

7.2 Access Control for Component Repository

There are two sides to access control for the component repository. First of all, a policy dictates who can place components into the component repository. For example, if components are stored in a directory called /ComponentRepository, the policy for the directory can specify who can add their components to the Component Repository. A user can be given permission to add components to the Component Repository by giving him write permissions for the directory. Secondly, when a user adds a component to the Component Repository, being the owner of the component (file) he can create a policy for it. Using this policy he can limit access to the component to certain users or roles, or to users satisfying certain attributes.

7.3 Access Control for Components

It is sometimes necessary to allow different users to instantiate components inside a single process space (component container).¹ This is done by associating security policies with each component. A Secure Component Manager manages the secure dynamic loading of such components and, with the help of interceptors, controls access to such components. Each component is started with a policy file.
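As an illustration (our own sketch, not an example from the paper), such a per-component policy file could reuse the notation of Fig 3 and Fig 4; the method names below are hypothetical, while the roles follow the earlier examples:

```
start   admin ;
stop    admin ;
default student & time > 8:00 & time < 17:00 ;
```

Under a policy like this, only users holding the admin role could start or stop an instance of the component, while any student could invoke its remaining methods during office hours.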
The creator of the component specifies access to certain users or roles using this policy file. This provides method-level access control to objects, as discussed in Sec 4.2.

¹ A Component Container is a placeholder for components in which components are instantiated and follow their life cycle.

Credentials in a Component Container

This section explains how Credentials are associated with components and provides some insight into the implementation. With every component in the ComponentContainer we associate two Credentials: "Self-Credential" and "Client-Credential" (see Fig 8). Consider a component container CC. If user A creates and loads a component inside the component container, user A's Credential is the component's "Self-Credential". Now a client C may make request invocations on this component. In such a scenario, C's Credentials are assigned to the "Client-Credential" of the component. Thus the component possesses both the creator's and the client's Credentials and can use them appropriately. For example, when the component wants to perform certain operations on behalf of the client it shall use the "Client-Credential", and when it wants to use its own Credential it shall use the "Self-Credential". On creation, the "Self-Credential" of the component is set to that of its creator. When a component behaves as a client, the interceptor on the client side inserts the Credential into the request. The interceptor on the server side transparently extracts the Credential, and the "Client-Credential" of the server-side component is assigned this value. In case of more than one client making requests on the component concurrently, a mapping of client threads and Credentials is maintained. By default the client-side interceptor inserts the "Self-Credential" into every request. If the client wishes to use any Credential other than the "Self-Credential", it sets the "Self-Credential" appropriately. The Secure Component Manager's API has methods that can be called to obtain either the "Self-Credential" or the "Client-Credential" corresponding to a particular client. Its API also has methods by which the value of the "Self-Credential" can be set.

8 Secure Bootstrapping

The bootstrapping is performed across two levels. The first boot level consists of bootstrapping the Security Services, Naming Service and other system services like the Trader Service (top level). These domain services constitute the Trusted Computing Base for Gaia. At the second level, an active space is bootstrapped. To ensure the integrity of the boot process, the Naming Service is "secured" at all times. By this we mean that Active Spaces can register in the appropriate context only upon proving their identity. For this purpose, we use a public-key cryptosystem. The Authentication Service has the public key of all the Active Spaces. When an Active Space boots and wants to register itself in the Naming Service, it proves its identity by encrypting a challenge string with its private key. Consider for example an "Active Room" being switched on. Any Active Space in Gaia OS has its own event service and other such services. These have to be registered in the Naming Service. The Active Space thus has to prove its identity to the Naming Service to register in the appropriate context and at the same time verify the integrity and authenticity of the Naming Service (which was booted in the first level). All this forms a part of the secure bootstrapping process.
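To make the registration step concrete, the exchange described above might proceed roughly as follows; this is our reading of the description, and the message names are illustrative rather than part of Gaia's actual interface:

```
1. Active Space   -> Naming Service : request to bind its services in a context
2. Naming Service -> Active Space   : challenge (a fresh random string)
3. Active Space   -> Naming Service : the challenge encrypted with the
                                      Active Space's private key
4. The Naming Service checks the response against the Active Space's public
   key (held by the Authentication Service) and, on success, performs the
   binding in the appropriate context.
```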
9 "Set-User Credential" Service

In Gaia we require a mechanism by which a process running with lower privileges can create an activity with higher privileges. Thus, there can be an executable that runs with a specific Credential irrespective of the user who invokes it. This feature is necessary to facilitate bootstrapping, so that while any user can trigger bootstrapping, the Active Space runs with its own Credential. For example, the first employee to arrive at work might be the one to "switch on" an "Active Supermarket", but the "Active Supermarket" runs with its own Credential rather than that of the employee who started it. Further, in a system that supports persistent objects we should be able to resurrect persistent objects with their original permissions. These problems require a "Set-User-Credential" mechanism, similar to the "Set-User-ID" mechanism in Unix, using which we can specify the Credential that the executable will acquire once it is invoked.

The "Set-User-Credential" service is used by owners of resources to specify the Credentials with which the components they own will execute, irrespective of the user who loads them. Further, this service will also be used by users to load components for which the Credential has been set. This service loads the component in a Component Container with the set Credential. The "Set-User-Credential" service consults system policies that dictate which components can be started with promoted Credentials. This interaction is shown in Fig 9.

Fig. 9. SetUserID service

10 Ongoing Work and Future Research

Active Spaces are characterized by applications which are "location-aware". We are building a Location Management System which tracks and detects users in Gaia. It is important that a user's privacy requirements are taken care of in such a system. We plan to integrate two different mechanisms into our Location Management System that will provide users privacy ranging from absolute to limited. In the absolute case the system learns a user's location only when the user desires so, and in the other case the system is always aware of the user's location but exercises discretion in distributing this sensitive information to others.

10.1 System-driven location tracking

This is an approach where all location information is stored within the system. The system is responsible for securing this data. The peril associated with this approach is that if the system is compromised, the privacy of all users is compromised. This also means that the user trusts the system with this information, and there is no guarantee that another powerful user of the system, such as an administrator, cannot gain access to such information by changing the policies. A user defines policies which govern the manner in which his location information is distributed to other users/applications. The system can employ devices which include RF-Badges[13] and I-buttons[16], or a traditional login mechanism, to detect a user's location.

10.2 User-driven location tracking

A second approach would be to take full advantage of wearable computing and store all such data on the user's wearable. Beacons would broadcast their location, and the sensors on the user's wearable/handheld would know their location. The user can then tell the system about his location when he needs to use "location-aware" services. Even while specifying his location, he can specify it in different granularities depending on how much information he wants to divulge.
Such systems are typically more expensive to build, though they provide a higher degree of privacy. This is discussed in more detail in [12].

11 Conclusion

Security is one of the most important aspects of a ubiquitous computing environment, and the Gaia architecture provides the mechanisms to build powerful security features into such a system. We show how we can have different kinds of Access Control and Authentication mechanisms with the help of Roles, Credentials and Policies. The simplicity of our system and its dynamism help us extend traditional operating system security into this new computing environment.

References

15. ORBACUS. http://www.ooc.com
The *rtracklayer* package

Michael Lawrence

June 18, 2017

Contents

1 Introduction
2 Gene expression and microRNA target sites
  2.1 Creating a target site track
    2.1.1 Constructing the GRanges
    2.1.2 Accessing track information
    2.1.3 Subsetting a GRanges
    2.1.4 Exporting and importing tracks
  2.2 Viewing the targets in a genome browser
    2.2.1 Starting a session
    2.2.2 Laying the track
    2.2.3 Viewing the track
    2.2.4 A shortcut
    2.2.5 Downloading Tracks from your Web Browser
    2.2.6 Accessing view state
3 CPNE1 expression and HapMap SNPs
  3.1 Loading and manipulating the track
  3.2 Browsing the SNPs
    3.2.1 Laying a WIG track
    3.2.2 Plotting the SNP track
4 Binding sites for NRSF
  4.1 Creating the binding site track
  4.2 Browsing the binding sites
5 Downloading tracks from UCSC
  5.1 Example 1: the RepeatMasker Track
  5.2 Example 2: DNaseI hypersensitivity regions in the K562 Cell Line
  5.3 Discovering Which Tracks and Tables are Available from UCSC
6 Conclusion

1 Introduction

The rtracklayer package is an interface (or layer) between R and genome browsers. Its main purpose is the visualization of genomic annotation tracks, whether generated through experimental data analysis performed in R or loaded from an external data source. The features of rtracklayer may be divided into two categories: 1) the import/export of track data and 2) the control and querying of external genome browser sessions and views.

The basic track data structure in Bioconductor is the GRanges class, defined in the GenomicRanges package. rtracklayer supports the import and export of tracks from and to files in various formats, see Section 2.1.4. All positions in a GRanges should be 1-based, as in R itself.

The rtracklayer package currently interfaces with the UCSC web-based genome browser. Other packages may provide drivers for other genome browsers through a plugin system. With rtracklayer, the user may start a genome browser session, create and manipulate genomic views, and import/export tracks and sequences to and from a browser. Please note that not all features are necessarily supported by every browser interface.

The rest of this vignette will consist of a number of case studies. First, we consider an experiment investigating microRNA regulation of gene expression, where the microRNA target sites are the primary genomic features of interest.
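Before turning to the case studies, here is a minimal illustration of the import side of the package; the file name is hypothetical, and any file in a supported format such as BED or GFF would do:

```r
library(rtracklayer)

## Import a track from a local annotation file (hypothetical path);
## the format is guessed from the file extension.
myTrack <- import("annotations.bed")

## The result is a GRanges with 1-based positions, ready for use with the
## GenomicRanges infrastructure or for upload to a genome browser.
myTrack
```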
2 Gene expression and microRNA target sites

This section will demonstrate the features of rtracklayer on a microarray dataset from a larger experiment investigating the regulation of human stem cell differentiation by microRNAs. The transcriptome of the cells was measured before and after differentiation by HG-U133plus2 Affymetrix GeneChip arrays. We begin our demonstration by constructing an annotation dataset from the experimental data, and then illustrate the use of the genome browser interface to display interesting genomic regions in the UCSC browser.

2.1 Creating a target site track

For the analysis of the stem cell microarray data, we are interested in the genomic regions corresponding to differentially expressed genes that are known to be targeted by a microRNA. We will represent this information as an annotation track, so that we may view it in the UCSC genome browser.

2.1.1 Constructing the GRanges

In preparation for creating the microRNA target track, we first used limma to detect the differentially expressed genes in the microarray experiment. The locations of the microRNA target sites were obtained from MiRBase. The code below stores information about the target sites on differentially expressed genes in the data.frame called targets, which can also be obtained by entering data(targets) when rtracklayer is loaded.

```r
> library("humanStemCell")
> data(fhesc)
> library("genefilter")
> filtFhesc <- nsFilter(fhesc)[[1]]
> library("limma")
> design <- model.matrix(~filtFhesc$Diff)
> hesclim <- lmFit(filtFhesc, design)
> hesceb <- eBayes(hesclim)
> tab <- topTable(hesceb, design)
> tab2 <- tab[(tab$logFC > 1) & (tab$adj.P.Val < 0.01),]
> affyIDs <- rownames(tab2)
> library("microRNA")
> data(hsTargets)
> library("hgu133plus2.db")
> entrezIDs <- mappedRkeys(hgu133plus2ENTREZID[affyIDs])
> library("org.Hs.eg.db")
> mappedEntrezIDs <- entrezIDs %in% mappedkeys(org.Hs.egENSEMBLTRANS)
> ensemblIDs <- mappedRkeys(org.Hs.egENSEMBLTRANS[mappedEntrezIDs])
> targetMatches <- match(ensemblIDs, hsTargets$target, 0)
> ## same as data(targets)
> targets <- hsTargets[targetMatches,]
> targets$chrom <- paste("chr", targets$chrom, sep = "")
```

The following code creates the track from the targets dataset:

```r
> library(rtracklayer)
> library(GenomicRanges)
> ## call data(targets) if skipping first block
> head(targets)
                name          target chrom     start       end strand
334437  hsa-miR-10a* ENST00000305798  chr4  99612455  99612476      -
493509 hsa-miR-519e* ENST00000369516  chr1 115392578 115392598      -
475630 hsa-miR-376a* ENST00000372003  chr1  46423863  46423887      +
250959   hsa-miR-215 ENST00000339728  chr2 235068571 235068591      -
250964   hsa-miR-621 ENST00000390645  chr2 235068710 235068729      -
200348  hsa-miR-129* ENST00000221847 chr19   4188086   4188094      +
> targetRanges <- IRanges(targets$start, targets$end)
> targetTrack <- with(targets,
+   GRangesForUCSCGenome("hg18", chrom, targetRanges, strand,
+                        name, target))
```

The GRangesForUCSCGenome function constructs a GRanges object for the named genome. The strand information, the name of the microRNA and the Ensembl ID of the targeted transcript are stored in the `GRanges`. The chromosome for each site is passed as the `chrom` argument. The chromosome names and lengths for the genome are taken from the UCSC database and stored in the `GRanges` along with the genome identifier.
We can retrieve them as follows:

```r
> genome(targetTrack)
         chr1          chr2          chr3          chr4          chr5 
       "hg18"        "hg18"        "hg18"        "hg18"        "hg18" 
         chr6          chr7          chr8          chr9         chr10 
       "hg18"        "hg18"        "hg18"        "hg18"        "hg18" 
        chr11         chr12         chr13         chr14         chr15 
       "hg18"        "hg18"        "hg18"        "hg18"        "hg18" 
        chr16         chr17         chr18         chr19         chr20 
       "hg18"        "hg18"        "hg18"        "hg18"        "hg18" 
        chr21         chr22          chrX          chrY          chrM 
       "hg18"        "hg18"        "hg18"        "hg18"        "hg18" 
  chr1_random   chr2_random   chr3_random   chr4_random  chr5_h2_hap1 
       "hg18"        "hg18"        "hg18"        "hg18"        "hg18" 
  chr5_random chr6_cox_hap1 chr6_qbl_hap2   chr6_random   chr7_random 
       "hg18"        "hg18"        "hg18"        "hg18"        "hg18" 
  chr8_random   chr9_random  chr10_random  chr11_random  chr13_random 
       "hg18"        "hg18"        "hg18"        "hg18"        "hg18" 
 chr15_random  chr16_random  chr17_random  chr18_random  chr19_random 
       "hg18"        "hg18"        "hg18"        "hg18"        "hg18" 
 chr21_random chr22_h2_hap1  chr22_random   chrX_random 
       "hg18"        "hg18"        "hg18"        "hg18" 
> head(seqlengths(targetTrack))
     chr1      chr2      chr3      chr4      chr5 
247249719 242951149 199501827 191273063 180857866 
```

While this extra information is not strictly needed to upload data to UCSC, calling `GRangesForUCSCGenome` is an easy way to formally associate interval data to a UCSC genome build. This ensures, for example, that the data will always be uploaded to the correct genome, regardless of browser state. It also immediately validates whether the intervals fall within the bounds of the genome. For cases where one is not interacting with the UCSC genome browser, and in particular when network access is unavailable, the `GRangesForBSGenome` function behaves the same, except it finds an installed `BSGenome` package and loads it to retrieve the chromosome information.

2.1.2 Accessing track information

The track information is now stored in the R session as a `GRanges` object. It holds the chromosome, start, end and strand for each feature, along with any number of data columns. The primary feature attributes are the `start`, `end`, `seqnames` and `strand`. There are accessors for each of these, named accordingly. For example, the following code retrieves the chromosome names and then start positions for each feature in the track.
```r
> head(seqnames(targetTrack))
factor-Rle of length 6 with 4 runs
  Lengths:    1    2    2     1
  Values : chr4 chr1 chr2 chr19
Levels(49): chr1 chr2 chr3 ... chr22_h2_hap1 chr22_random chrX_random
```

```r
> head(start(targetTrack))
[1]  99612455 115392578  46423863 235068571 235068710   4188086
```

Exercises

1. Get the strand of each feature in the track
2. Calculate the length of each feature
3. Reconstruct (partially) the `targets` data.frame

2.1.3 Subsetting a GRanges

It is often helpful to extract subsets from GRanges instances, especially when uploading to a genome browser. The data can be subset through a matrix-style syntax by feature and column. The conventional `[` method is employed for subsetting, where the first parameter, `i`, indexes the features and `j` indexes the data columns. Both `i` and `j` may contain numeric, logical and character indices, which behave as expected.

```r
> ## get the first 10 targets
> first10 <- targetTrack[1:10]
> ## get pos strand targets
> posTargets <- targetTrack[strand(targetTrack) == "+"]
> ## get the targets on chr1
> chr1Targets <- targetTrack[seqnames(targetTrack) == "chr1"]
```

Exercises

1. Subset the track for all features on the negative strand of chromosome 2.

2.1.4 Exporting and importing tracks

Import and export of GRanges instances is supported in the following formats: Browser Extended Display (BED), versions 1, 2 and 3 of the General Feature Format (GFF), and Wiggle (WIG). Support for additional formats may be provided by other packages through a plugin system. To save the microRNA target track created above in a format understood by other tools, we could export it as BED. This is done with the export function, which accepts a filename or any R connection object as its target. If a target is not given, the serialized string is returned. The desired format is derived, by default, from the extension of the filename. Use the format parameter to explicitly specify a format.

```r
> export(targetTrack, "targets.bed")
```

To read the data back in a future session, we could use the import function. The source of the data may be given as a connection, a filename or a character vector containing the data. Like the export function, the format is determined from the filename, by default.

```r
> restoredTrack <- import("targets.bed")
```

The restoredTrack object is of class GRanges.

Exercises

1. Output the track to a file in the "gff" format.
2. Read the track back into R.
3. Export the track as a character vector.

2.2 Viewing the targets in a genome browser

For the next step in our example, we will load the track into a genome browser for visualization with other genomic annotations. The rtracklayer package is capable of interfacing with any genome browser for which a driver exists. In this case, we will interact with the web-based UCSC browser, but the same code should work for any browser.

2.2.1 Starting a session

The first step towards interfacing with a browser is to start a browser session, represented in R as a BrowserSession object. A BrowserSession is primarily a container of tracks and genomic views. The following code creates a BrowserSession for the UCSC browser:

```r
> session <- browserSession("UCSC")
```

Note that the name of any other supported browser could have been given here instead of "UCSC".
To see the names of supported browsers, enter:

```r
> genomeBrowsers()
[1] "UCSC"
```

2.2.2 Laying the track

Before a track can be viewed on the genome, it must be loaded into the session using the `track<-` function, as demonstrated below:

```r
> track(session, "targets") <- targetTrack
```

The `name` argument should be a character vector that will help identify the track within `session`. Note that the invocation of `track<-` above does not specify an upload format. Thus, the default, "auto", is used. Since the track does not contain any data values, the track is uploaded as BED. To make this explicit, we could pass "bed" as the `format` parameter.

Exercises

1. Lay a track with the first 100 features of `targetTrack`, using the shortcut `$` syntax for storing the track.

2.2.3 Viewing the track

For UCSC, a view roughly corresponds to one tab or window in the web browser. The target sites are distributed throughout the genome, so we will only be able to view a few features at a time. In this case, we will view only the first feature in the track. A convenient way to focus a view on a particular set of features is to subset the track and pass the range of the subtrack to the constructor of the view. Below we take a track subset that contains only the first feature.

```r
> subTargetTrack <- targetTrack[1] # get first feature
```

Now we call the `browserView` function to construct the view and pass the subtrack, zoomed out by a factor of 10, as the segment to view. By passing the name of the targets track in the `pack` parameter, we instruct the browser to use the "pack" mode for viewing the track. This results in the name of the microRNA appearing next to the target site glyph.

```r
> view <- browserView(session, subTargetTrack * -10, pack = "targets")
```

If multiple ranges are provided, multiple views are launched:

```r
> view <- browserView(session, targetTrack[1:5] * -10, pack = "targets")
```

Exercises

1. Create a new view with the same region as view, except zoomed out 2X.
2. Create a view with the "targets" track displayed in "full" mode, instead of "packed".

2.2.4 A shortcut

There is also a shortcut to the above steps. The `browseGenome` function creates a session for a specified browser, loads one or more tracks into the session and creates a view of a given genome segment. In the following code, we create a new UCSC session, load the track and view the first two features, all in one call:

```r
> browseGenome(targetTrack, range = subTargetTrack * -10)
```

It is even simpler to view the subtrack in UCSC by relying on parameter defaults:

```r
> browseGenome(subTargetTrack)
```

2.2.5 Downloading Tracks from your Web Browser

It is possible to query the browser to obtain the names of the loaded tracks and to download the tracks into R. To list the tracks loaded in the browser, enter the following:

```r
> loaded_tracks <- trackNames(session)
```

One may download any of the tracks, such as the "targets" track that was loaded previously in this example.

```r
> subTargetTrack <- track(session, "targets")
```

The returned object is a `GRanges`, even if the data was originally uploaded as another object. By default, the segment of the track downloaded is the current default genome segment associated with the session. One may download track data for any genome segment, such as those on a particular chromosome. Note that this does not distinguish by strand; we are only indicating a position on the genome.
```r
> chr1Targets <- track(session, "targets", chr1Targets)
```

Exercises

1. Get the SNP under the first target, displayed in view.
2. Get the UCSC gene for the same target.

2.2.6 Accessing view state

The view variable is an instance of BrowserView, which provides an interface for getting and setting view attributes. Note that for the UCSC browser, changing the view state opens a new view, as a new page must be opened in the web browser. To programmatically query the segment displayed by a view, use the range method for a BrowserView.

```r
> segment <- range(view)
```

Similarly, one may get and set the names of the visible tracks in the view.

```r
> visible_tracks <- trackNames(view)
> trackNames(view) <- visible_tracks
```

The visibility mode (hide, dense, pack, squish, full) of the tracks may be retrieved with the ucscTrackModes method.

```r
> modes <- ucscTrackModes(view)
```

The returned value, modes, is of class UCSCTrackModes. The modes may be accessed using the `[` function. Here, we set the mode of our "targets" track to "full" visibility.

```r
> modes["targets"]
> modes["targets"] <- "full"
> ucscTrackModes(view) <- modes
```

Existing browser views for a session may be retrieved by calling the browserViews method on the browserSession instance.

```r
> views <- browserViews(session)
> length(views)
```

Exercises

1. Retrieve the target currently visible in the view.
2. Limit the view to display only the SNP, UCSC gene and target tracks.
3. Hide the UCSC gene track.

3 CPNE1 expression and HapMap SNPs

Included with the rtracklayer package is a track object (created by the GGtools package) with features from a subset of the SNPs on chromosome 20 from 60 HapMap founders in the CEU cohort. Each SNP has an associated data value indicating its association with the expression of the CPNE1 gene according to a Cochran-Armitage 1df test. The top 5000 scoring SNPs were selected for the track. We load the track presently.

3.1 Loading and manipulating the track

The data values for a track are stored in the metadata columns of the `GRanges` instance. Often, a track contains a single column of numeric values, conventionally known as the `score`. The `score` function retrieves the metadata column named `score` or, if one does not exist, the first metadata column in the `GRanges`, as long as it is numeric. Otherwise, `NULL` is returned.

```r
> head(score(cpneTrack))
 rs4814683  rs6076506  rs6139074  rs1418258  rs7274499  rs6116610 
0.16261691 0.02170423 0.47098379 0.16261691 0.05944578 0.18101862 
```

One use of extracting the data values is to plot the data.

```r
> plot(start(cpneTrack), score(cpneTrack))
```

3.2 Browsing the SNPs

We now aim to view some of the SNPs in the UCSC browser. Unlike the microRNA target site example above, this track has quantitative information, which requires special consideration for visualization.

3.2.1 Laying a WIG track

To view the SNP locations as a track in a genome browser, we first need to upload the track to a fresh session. In the code below, we use the `$` alias of `track<-`.

```r
> session <- browserSession()
> session$cpne <- cpneTrack
```

Note that because `cpneTrack` contains data values and its features do not overlap, it is uploaded to the browser in the WIG format. One limitation of the WIG format is that it is not possible to encode strand information. Thus, each strand needs to have its own track, and `rtracklayer` does this automatically, unless only one strand is represented in the track (as in this case).
One could pass "bed" to the `format` parameter of `track<-` to prevent the split, but tracks uploaded as BED are much more limited compared to WIG tracks in terms of visualization options. To form the labels for the WIG subtracks, "p" is concatenated onto the plus track and "m" onto the minus track. Features with missing track information are placed in a track named with the "na" postfix. It is important to note that the subtracks must be identified individually when, for example, downloading the track or changing track visibility.

3.2.2 Plotting the SNP track

To plot the data values for the SNPs in a track, we need to create a `browserView`. We will view the region spanning the first 5 SNPs in the track, which will be displayed in the "full" mode.

```r
> view <- browserView(session, range(cpneTrack[1:5,]), full = "cpne")
```

The UCSC browser will plot the data values as bars. There are several options available for tweaking the plot, as described in the help for the `GraphTrackLine` class. These need to be specified when laying the track, so we will lay a new track named "cpne2". First, we will turn the `autoScale` option off, so that the bars will be scaled globally, rather than locally to the current view. Then we could turn on the `yLineOnOff` option to add a horizontal line that could represent some sort of cut-off. The position of the line is specified by `yLineMark`. We set it arbitrarily to the 25% quantile.

```r
> track(session, "cpne2", autoScale = FALSE, yLineOnOff = TRUE,
+       yLineMark = quantile(score(cpneTrack), .25)) <- cpneTrack
> view <- browserView(session, range(cpneTrack[1:5,]), full = "cpne2")
```

4 Binding sites for NRSF

Another common type of genomic feature is transcription factor binding sites. Here we will use the `Biostrings` package to search for matches to the binding motif for NRSF, convert the result to a track, and display a portion of it in the UCSC browser.

4.1 Creating the binding site track

We will use the Biostrings package to search human chromosome 1 for NRSF binding sites. The binding sequence motif is assumed to be TCAGCACCATGGACAG, though in reality it is more variable. To perform the search, we run matchPattern on the positive strand of chromosome 1.

```r
> library(BSgenome.Hsapiens.UCSC.hg19)
> nrsfHits <- matchPattern("TCAGCACCATGGACAG", Hsapiens[["chr1"]])
> length(nrsfHits) # number of hits
[1] 2
```

We then convert the hits, stored as a Views object, to a GRanges instance.

```r
> nrsfTrack <- GenomicData(ranges(nrsfHits), strand="+", chrom="chr1",
+                          genome = "hg19")
```

GenomicData is a convenience function that constructs a GRanges object.

4.2 Browsing the binding sites

Now that the NRSF binding sites are stored as a track, we can upload them to the UCSC browser and view them. Below, we load the track and view the region around the first hit in a single call to browseGenome.

```r
> session <- browseGenome(nrsfTrack, range = range(nrsfTrack[1]) * -10)
```

We observe significant conservation across mammal species in the region of the motif.

5 Downloading tracks from UCSC

rtracklayer can be used to download annotation tracks from the UCSC table browser, thus providing a convenient programmatic alternative to the web interface available at http://genome.ucsc.edu/cgi-bin/hgTables. Note that not all tables are output in parseable form, and that UCSC will truncate responses if they exceed certain limits (usually around 100,000 records). The safest (and most efficient) bet for large queries is to download the file via FTP and query it locally.
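One possible version of that local-query workflow is sketched here; the file name is hypothetical and assumes the track has already been downloaded from the UCSC FTP site and converted to a format that `import` understands (such as BED):

```r
library(rtracklayer)
library(GenomicRanges)

## Hypothetical local copy of a large annotation track (e.g. RepeatMasker),
## previously downloaded from the UCSC FTP site and converted to BED.
repeats <- import("rmsk_hg19.bed")

## Restrict to a region of interest locally instead of querying the
## remote table browser.
roi <- GRanges("chr6", IRanges(20400587, 20403336))
repeats.roi <- subsetByOverlaps(repeats, roi)
```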
5.1 Example 1: the RepeatMasker Track

This simple example identifies repeat-masked regions in and around the transcription start site (TSS) of the human E2F3 gene, in hg19:

```r
> library(rtracklayer)
> mySession = browserSession("UCSC")
> genome(mySession) <- "hg19"
> e2f3.tss.grange <- GRanges("chr6", IRanges(20400587, 20403336))
> tbl.rmsk <- getTable(
+     ucscTableQuery(mySession, track="rmsk",
+                    range=e2f3.tss.grange, table="rmsk"))
```

There are several important points to understand about this example:

1. The `ucscTableQuery` used above is a proxy for, and provides communication with, the remote UCSC table browser (see http://genome.ucsc.edu/cgi-bin/hgTables).
2. You must know the name of the track and table (or sub-track) that you want. The way to do this is explained in detail below, in section 5.3.
3. If the track contains multiple tables (which is the case for many ENCODE tracks, for instance), then you must also specify that table name.
4. When the track contains a single table only, you may omit the `table` parameter, or reuse the track name (as we did above).
5. If you omit the range parameter, the full track table is returned, covering the entire genome.
6. The amount of time required to download a track is roughly a function of the number of features in the track, which is in turn a function of the density of those features, and the length of the genomic range you request. To download the entire RepeatMasker track, for all of hg19, would take a very long time, and is a task poorly suited to rtracklayer. By contrast, one full-genome DNaseI track takes less than a minute (see below).

5.2 Example 2: DNaseI hypersensitivity regions in the K562 Cell Line

The ENCODE project (http://encodeproject.org/ENCODE) provides many hundreds of annotation tracks to the UCSC table browser. One of these describes DNaseI hypersensitivity for K562 cells (an immortalized erythroleukemia line) measured at the University of Washington using 'Digital Genome Footprinting' (see http://www.ncbi.nlm.nih.gov/pubmed?term=19305407). Obtain DNaseI hypersensitive regions near the E2F3 TSS (the same call without the range argument would return the full-genome table):

```r
> e2f3.tss.grange <- GRanges("chr6", IRanges(20400587, 20403336))
> tbl.dnase <- getTable(
+     ucscTableQuery(mySession, track="dnase",
+                    range=e2f3.tss.grange, table="dnase"))
```

5.3 Discovering Which Tracks and Tables are Available from UCSC

As the examples above demonstrate, you must know the exact UCSC-style name for the track and table you wish to download. You may browse these interactively at http://genome.ucsc.edu/cgi-bin/hgTables?org=Human&db=hg19 or programmatically, as we demonstrate here.

```r
> mySession <- browserSession()
> genome(mySession) <- "hg19"
> # 177 tracks in October 2012
> track.names <- trackNames(ucscTableQuery(mySession))
> # chose a few tracks at random from this set, and discover how
> # many tables they hold
> tracks <- track.names[c(99, 81, 150, 96, 90)]
> sapply(tracks, function(track) {
+    length(tableNames(ucscTableQuery(mySession, track=track)))
+ })
```

6 Conclusion

These case studies have demonstrated a few of the most important features of rtracklayer. Please see the package documentation for more details.
The following is the session info that generated this vignette: ```r > sessionInfo() R version 3.4.0 (2017-04-21) Platform: x86_64-pc-linux-gnu (64-bit) Running under: Ubuntu 16.04.2 LTS Matrix products: default BLAS: /home/biocbuild/bbs-3.6-bioc/R/lib/libRblas.so LAPACK: /home/biocbuild/bbs-3.6-bioc/R/lib/libRlapack.so locale: [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C ```
{"Source-Url": "http://bioconductor.org/packages/devel/bioc/vignettes/rtracklayer/inst/doc/rtracklayer.pdf", "len_cl100k_base": 8037, "olmocr-version": "0.1.53", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 32010, "total-output-tokens": 8782, "length": "2e12", "weborganizer": {"__label__adult": 0.0003712177276611328, "__label__art_design": 0.0007891654968261719, "__label__crime_law": 0.000400543212890625, "__label__education_jobs": 0.001929283142089844, "__label__entertainment": 0.0002994537353515625, "__label__fashion_beauty": 0.00021731853485107425, "__label__finance_business": 0.00033736228942871094, "__label__food_dining": 0.0003707408905029297, "__label__games": 0.0009279251098632812, "__label__hardware": 0.0017576217651367188, "__label__health": 0.0008902549743652344, "__label__history": 0.0005030632019042969, "__label__home_hobbies": 0.0002384185791015625, "__label__industrial": 0.0006151199340820312, "__label__literature": 0.0004091262817382813, "__label__politics": 0.00041031837463378906, "__label__religion": 0.0007038116455078125, "__label__science_tech": 0.290771484375, "__label__social_life": 0.0002856254577636719, "__label__software": 0.1065673828125, "__label__software_dev": 0.58984375, "__label__sports_fitness": 0.0004642009735107422, "__label__transportation": 0.00038743019104003906, "__label__travel": 0.0002884864807128906}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27523, 0.04921]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27523, 0.24366]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27523, 0.79768]], "google_gemma-3-12b-it_contains_pii": [[0, 2138, false], [2138, 4760, null], [4760, 6808, null], [6808, 9581, null], [9581, 11091, null], [11091, 13067, null], [13067, 15130, null], [15130, 16856, null], [16856, 18641, null], [18641, 19564, null], [19564, 22034, null], [22034, 23767, null], [23767, 26257, null], [26257, 27523, null], [27523, 27523, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2138, true], [2138, 4760, null], [4760, 6808, null], [6808, 9581, null], [9581, 11091, null], [11091, 13067, null], [13067, 15130, null], [15130, 16856, null], [16856, 18641, null], [18641, 19564, null], [19564, 22034, null], [22034, 23767, null], [23767, 26257, null], [26257, 27523, null], [27523, 27523, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27523, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27523, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27523, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27523, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27523, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27523, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27523, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27523, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27523, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27523, null]], "pdf_page_numbers": [[0, 2138, 1], [2138, 4760, 2], [4760, 6808, 3], [6808, 9581, 4], [9581, 11091, 5], [11091, 13067, 6], [13067, 15130, 7], [15130, 16856, 8], [16856, 18641, 9], [18641, 19564, 10], [19564, 22034, 
11], [22034, 23767, 12], [23767, 26257, 13], [26257, 27523, 14], [27523, 27523, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27523, 0.06989]]}
olmocr_science_pdfs
2024-12-12
2024-12-12
b4da8a860c16f12adf3c2a8c7dbfbbc6a7793758
[REMOVED]
{"len_cl100k_base": 7608, "olmocr-version": "0.1.50", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 35962, "total-output-tokens": 11521, "length": "2e12", "weborganizer": {"__label__adult": 0.0007586479187011719, "__label__art_design": 0.0012884140014648438, "__label__crime_law": 0.0006256103515625, "__label__education_jobs": 0.0011444091796875, "__label__entertainment": 0.00016963481903076172, "__label__fashion_beauty": 0.0003767013549804687, "__label__finance_business": 0.0003790855407714844, "__label__food_dining": 0.0006246566772460938, "__label__games": 0.0013399124145507812, "__label__hardware": 0.0278778076171875, "__label__health": 0.0013151168823242188, "__label__history": 0.0007166862487792969, "__label__home_hobbies": 0.0002961158752441406, "__label__industrial": 0.002025604248046875, "__label__literature": 0.0002956390380859375, "__label__politics": 0.0005145072937011719, "__label__religion": 0.0012989044189453125, "__label__science_tech": 0.472412109375, "__label__social_life": 0.00010287761688232422, "__label__software": 0.006092071533203125, "__label__software_dev": 0.477783203125, "__label__sports_fitness": 0.0005965232849121094, "__label__transportation": 0.0016164779663085938, "__label__travel": 0.0003633499145507813}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 42863, 0.07246]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 42863, 0.32878]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 42863, 0.86284]], "google_gemma-3-12b-it_contains_pii": [[0, 2694, false], [2694, 8178, null], [8178, 11749, null], [11749, 14056, null], [14056, 17734, null], [17734, 19492, null], [19492, 23327, null], [23327, 26715, null], [26715, 29959, null], [29959, 32921, null], [32921, 39270, null], [39270, 42863, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2694, true], [2694, 8178, null], [8178, 11749, null], [11749, 14056, null], [14056, 17734, null], [17734, 19492, null], [19492, 23327, null], [23327, 26715, null], [26715, 29959, null], [29959, 32921, null], [32921, 39270, null], [39270, 42863, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 42863, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 42863, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 42863, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 42863, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 42863, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 42863, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 42863, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 42863, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 42863, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 42863, null]], "pdf_page_numbers": [[0, 2694, 1], [2694, 8178, 2], [8178, 11749, 3], [11749, 14056, 4], [14056, 17734, 5], [17734, 19492, 6], [19492, 23327, 7], [23327, 26715, 8], [26715, 29959, 9], [29959, 32921, 10], [32921, 39270, 11], [39270, 42863, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 42863, 0.23567]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
a0c06172d71d7a8e5b727c1d5aa02ae2cc3cbd31
Analysis of Software Development Processes in a Healthcare Facility

Onyekwere Eucharia\(^1\), Ogwueleka Francisca Nonyelum\(^2\) \(^1,2\) Computer Science Department, Nigerian Defence Academy Kaduna, Nigeria

Abstract - The advent of technology has facilitated the advancement and improvement of healthcare. Health informatics has made it possible for individuals to seek medical care through various means. Making this possible depends on the development of efficient and effective software to run the healthcare facility. Healthcare facilities are categorized according to the level of care they render and comprise multiple stakeholders, such as patients, doctors and other health workers. Developing software for healthcare requires consideration of many factors, such as the Systems Development Life Cycle model to be adopted in building the software, software risk management standards, and the requirements of the facility. This study focused on the analysis of these processes using a privately owned secondary healthcare facility as a case study. Personal and telephone interviews were conducted to collect data from the software developers and the information technologist. Questionnaires were administered to patients and staff to assess the extent to which the software has been integrated into the work culture of the health facility and its effectiveness. The study analyzed the processes used in the development of software for a secondary healthcare facility in Kaduna State, Nigeria, based on the results of the questionnaires administered and the interviews carried out. The data obtained were analyzed using the Statistical Package for the Social Sciences version 25, and the results show that 84.4% of the medical staff can easily access patients' records for retrieval in all the relevant departments, but there is no room for adding new features to the software as more requirements are elicited. Because a healthcare facility is a safety-critical environment, risk management standards based on international best practice should be employed in the development of software that serves this purpose.

Keywords - Healthcare facility, SDLC, Health informatics, Kaduna State, Risk management standard, Hospital Management System.

I. INTRODUCTION

Technological innovations have led to breakthroughs in fields such as engineering, energy and aerospace, and the healthcare sector should not be left out of this global trend towards making life easier and more convenient. Healthcare has evolved in its processes of knowledge acquisition, its use of equipment and its modes of rendering care. Information technology has facilitated the advancement and improvement of quality patient care all over the world. Patients seek medical care through telemedicine and electronic health records, an integral part of health informatics that has made it possible and more convenient for patients to access their medical records anywhere in the world. Healthcare informatics is a multidisciplinary field comprising medical informatics, nursing informatics and biomedical informatics. Health informatics is the aspect of information management that relates to health (AHIMA, 2014). Hospital equipment such as digital thermometers, x-ray machines, ultrasound scanners, magnetic resonance imaging machines, computed tomography machines and a host of others has made medical diagnosis much easier and more effective.
Hence, software in any medical facility must be able to accommodate all these facets of the healthcare industry, with the sole aim of meeting patients' needs and improving the productivity of health workers. Software meant for a healthcare facility should satisfy patients' needs and at the same time create a cohesive work system across the different fields and departments, thereby improving the productivity of the health workers. The National Academy of Sciences, in its publication "A Framework for a Systems Approach to Health Care Delivery" (2001), stated that "the health care is divided into four nested levels: the individual patient; the care team, which includes the professional care providers (clinicians, pharmacists and others); the organization (hospital, clinic) and; the political and economic environment".

There are many processes involved in the development of software. According to Dennis et al. (2012), developing software is comparable to building a house, in that the idea must be conceived, planned, analyzed and implemented. In developing software, there are stages to be followed, which are contained in the Systems Development Life Cycle (SDLC). The SDLC is a series of processes to be adhered to in the development of software, consisting of four phases: Dennis et al. (2012) described the SDLC as comprising the planning, analysis, design and implementation phases. Each phase in turn comprises a series of steps that produce an output. Several SDLC models can be adopted during the development of a software project, each having its own way of ensuring success and efficiency in meeting the client's needs.

A. Background of the Study

Developing medical software is paramount for healthcare facilities globally, as quality improvement is high on the agenda of several countries all over the world. According to Curry et al. (2006), there is great interest in the efficient and effective delivery of safe health care, as evidenced by the large sums that many countries have budgeted for it. A healthcare facility is a location or place where health care is provided (MedlinePlus, 2018). A healthcare facility can be a primary, secondary (specialized) or tertiary health institution. Most primary healthcare facilities are smaller and privately owned, such as clinics and doctors' offices, although there are government-owned primary health care centers that provide preventive care against common diseases, particularly in children. Tertiary healthcare facilities are much larger, comprising many departments and units with large and sophisticated equipment, so the process of developing software for such a facility can be complex and requires a great deal of integration. The process of developing software for a healthcare facility is not in any way different from the oldest and most extensively used methods of software development and acquisition. The SDLC comprises various models that can be applied in developing a software project for a hospital. This study looked at two models that can assist in attaining the deliverables, considering the four levels of the healthcare system and the type of healthcare facility in question. The Dialogue Specialist Hospital used as a reference point is located in the central part of the capital of Kaduna State, Nigeria.
It is a privately owned, modern healthcare facility, well furnished with modern diagnostic machines. The hospital is equipped with thirty beds for the admission and observation of patients and has facilities to cater for patients with different specialist conditions and diseases affecting different parts of the body. The operational units/departments are Obstetrics and Gynecology (O&G); the Ear, Nose and Throat (ENT) clinic; the Surgery department; Family Medicine; and the Pediatric clinic. The hospital has sixty-seven staff in all, including 9 permanent and 5 visiting doctors and 14 nurses. Other staff include medical laboratory scientists, an information technologist, accountants and the other staff needed to run the facility. It admits an average of sixty patients per month and also provides outpatient diagnostic services, family planning and immunization services to children below the age of five.

B. Statement of the Problem

The hospital management system is multifaceted: at the core of every healthcare facility is the patient, followed by the doctors and nurses, and then the laboratory scientists, radiographers and other health workers, as well as the diagnostic equipment used by the doctors in providing adequate health care. In addition, the pharmacy department provides drugs and the accounting section is responsible for the overall finances of the hospital. The different facets of the healthcare facility are shown in Figure 1. In this part of the world, registering a patient at the point of entry to a hospital can be very cumbersome under a manual system of registration, and retrieving those records when the need arises, especially when the patient comes for follow-up, can be a herculean task. The care team, which is the rudimentary building block of any health microsystem, also has trouble linking patients' records. The use of software has therefore become paramount and a differentiator for healthcare seekers as well as healthcare providers and medical device producers. Medical device software is safety-critical, so when developing medical software, risk management standards must be maintained. This study therefore tries to identify the hindrances and peculiarities associated with the software development process in a healthcare facility.

Figure 1: Different facets of a health care facility (Imantechsolutions.com)

C. Aim and Objectives of the Study

The aim of this study is to analyze the processes involved in the development of software for a healthcare facility. It also seeks to identify the readiness and willingness of private hospital owners to adopt medical software in managing their healthcare facilities. The objectives are: i. to collect data using questionnaires and interviews with a stratified sampling technique; ii. to analyse the data collected using the Statistical Package for the Social Sciences version 25 and to discuss the analysis using pie charts and bar charts; iii. to evaluate the results of the analysis, draw conclusions and make recommendations as deemed fit.

D. Significance of the Study

The findings of this study will reveal the processes involved in developing software for a healthcare facility, with emphasis on the various models of the Systems Development Life Cycle.
This study will identify which of the models is most appropriate and effective for developing software to manage a healthcare facility.

E. Scope of the Study

This study focused on the analysis of the SDLC models and on identifying the model best suited to the development of a system for a healthcare facility. The different facets of a healthcare facility were also illustrated. A specialized private hospital, the Dialogue Specialist Hospital, a secondary healthcare facility situated in the Kaduna metropolis, was used as the case study from which data were collected.

F. Limitations of the Study

This study is limited to two of the SDLC models: the Waterfall model, which was the model used by the developers of the software in the case-study hospital, and the Agile model, which would have been a more appropriate choice for developing software for a healthcare facility. The reference point used as a case study is limited to a privately owned healthcare facility located in Kaduna State, Nigeria. According to the Kaduna State Ministry of Health, there are 1011 primary health care centers, 29 secondary health facilities and 6 tertiary health facilities in the state. None of the tertiary or primary health facilities uses software to run its activities, while 4 out of the 29 secondary healthcare facilities use software to run their business. This hospital was therefore selected by purposive sampling, owing to its central location, the number of patients it attends to and the number of staff it employs. It is also well organized, with ultra-modern equipment.

G. Research Questions

The research questions are: i. What are the processes involved in the development of the medical software? ii. What is the impact of the software on the productivity of the healthcare institution? iii. Is there a software risk management standard in the country, and if so, does the software conform to it? iv. Does the software contribute to the ease and quality of patient care?

H. Organization of Paper

This paper is organized into five chapters. Chapter 1 introduces the study, giving an overview of the subject matter; it states the problem, the aim and objectives, the significance and scope of the study, its limitations and the research questions. Chapter 2 covers the literature review, which concerns related work previously carried out by other researchers and provides relevant background for better understanding. Chapter 3 gives an insight into the methodology used in the study, chapter 4 presents the data analysis and a discussion of the findings, and chapter 5 summarizes and concludes the study.

J. Definition of Terms

Biomedical informatics (BMI) – An interdisciplinary field that studies the effective use of biomedical data, information and knowledge for scientific inquiry, problem solving and decision making in order to improve health. Digital thermometers – Portable temperature-sensing devices that have permanent probes and a digital display. Electronic Health Records – Electronic records of health-related information on an individual that can be created, gathered, managed and consulted by authorized clinicians and staff within one healthcare organization. Health care facility – A place that provides health care, including hospitals, clinics, outpatient care centers and specialized care centers.
Health care informatics – The use of information technology to analyze health records in order to improve healthcare outcomes; also known as health information systems. Magnetic resonance imaging – A medical imaging technique used in radiology to form pictures of the anatomy and physiological processes of the body in both health and disease. Medical informatics – The intersection of information science, computer science and health care. Multifaceted – Having many different aspects or features. Nursing informatics – A specialty that integrates nursing science with multiple information management and analytical sciences to identify, define, manage and communicate data, information, knowledge and wisdom in nursing practice. Telemedicine – The remote diagnosis and treatment of patients by means of telecommunications technology.

II. LITERATURE REVIEW

A. Introduction

The variety of healthcare facilities currently in existence is wide, ranging from small and moderately simple clinics to elaborate, multifaceted and expensive research and teaching hospitals. According to Carr (2017), large health centers comprise several subsidiary and specialized, interdependent healthcare facilities. These facilities communicate with clients, donors, vendors and staff about the organization and the kind of medical care they render. The best healthcare facility software should be a set of solutions that helps create a combined work system across all the medical subdivisions, making room for comparisons of medical examinations and treatments. When developing a system for a healthcare facility, the potential users need to be considered: the staff, the patients and the hospital authorities. Ferlie and Shortell (2001) described the healthcare system as comprising four nested levels: first, the individual patient; second, the care team, comprising the professional health workers (doctors, nurses, laboratory scientists and pharmacists) and family members; third, the organization, which is concerned with management and administrative processes; and lastly, the political and economic environment. Developing a healthcare model is a multidisciplinary effort involving healthcare professionals, systems analysts and programmers, thereby necessitating the application of techniques such as the SDLC models of systems development.

B. Classification of Health Care Facilities

There is an increasing number of healthcare facility types because of a drift towards specialization. The division of healthcare facilities into specialized areas and different levels of care results from the transition from hospital-based curative care to outpatient care and the implementation of preventive measures. i) Primary Health Care Facilities: These are first-level facilities that are easily accessible to health seekers; typical examples are clinics that run for just a few hours a day. According to the Ministry of Health, primary health care centers in Nigeria are usually government-owned and provide preventive health care to their clients. Another type of primary healthcare facility is the community healthcare center, which provides initial baseline maternity, accident and emergency care to patients for a period not exceeding 48 hours prior to discharge or transfer to a larger facility. Community health centers are located mainly in rural areas and small communities. There are also privately owned clinics that fall into this group.
ii) Specialized Hospitals: These hospitals render services to specific groups of healthcare seekers. They accept referrals from primary healthcare facilities, providing specialist care for specific health concerns. Our reference point in this study, the privately owned secondary healthcare facility known as Dialogue Hospital, belongs to this group. iii) Tertiary Health Care Institutions: Tertiary healthcare institutions are very large and complex healthcare facilities that provide a vast range of services spanning education, curative care and research. They serve as referral centers where difficult and complex health conditions are treated, and they comprise various units and departments that operate independently of one another.

C. Processes Involved in the Development of Software for a Health Care Facility

To develop a robust system for any hospital, it is paramount to understand the dynamics and organization of the healthcare system. Software for a healthcare facility must, firstly, be able to handle the hospital management system; secondly, it must always provide correct and appropriate information; thirdly, the hospital automation system should be flexible and open to improvement; and fourthly, the user interface should be informative, easy and convenient to use. Ferlie and Shortell (2001) described the four-nested-level model of the healthcare system shown in Figure 2.1: the individual patient, the healthcare team, the organization, and the economic and political environment. i) The Individual Patient: Coddington et al. (2001) stated that any healthcare establishment that fails to accord the patient its rightful place at the core of its integration efforts is bound to fail. As Ferlie and Shortell (2002) put it, the requirements and desires of the individual patient should be a determining element of a patient-centered healthcare facility. In order to render efficient healthcare services to their clients, hospitals need to set up systems that can process calls from prospective patients and provide information about appointment times as well as other activities and services rendered. Health information management should be made an integral part of the system. This will help speed up the patient registration process, avoid duplication of client data, secure storage for easy retrieval and prevent the loss of patients' records. ii) The Health Care Team: According to Ferlie and Shortell (2001), the care team is the second level of the healthcare system, comprising the doctors, nurses, pharmacists, laboratory scientists, patients' relatives and other healthcare workers. The care team is the rudimentary building block of any health microsystem. The needs of the healthcare team should be taken into consideration, since they are the major users of the system. Doctors need to devote more time to caring for the patient than to documentation, so all the patient data needs to be in one place so that the doctor has easy access to the patient's medical history as well as test results, in order to facilitate accurate diagnosis and proper prescription. With proper integration of the patient's medical records, the medical scientists can easily access the recommended laboratory tests and enter the results as soon as they are ready, and the nurse can consult the treatment schedule and carry it out.
D. Regulatory Issues

Software is in most cases developed according to the needs of the client, whereas medical device software should be built with patient safety as the primary concern; that is, medical software should be developed according to the software risk management standards for health. For instance, in Nigeria all measuring equipment must conform to the requirements of the Standards Organisation of Nigeria (SON), which is a member of the International Organization for Standardization (ISO). The main functions of SON include ensuring reference standards for the calibration and verification of measures and measuring instruments, as well as the certification of quality and environmental systems.

E. Patient Data Security

The electronic storage of patient data can adversely affect the privacy of patients if there is no data security (Haak et al., 2003). Consequently, software developers must safeguard the privacy of patient information.

F. The Systems Development Life Cycle

The SDLC consists of a series of activities or steps that must be adhered to in the process of developing software. According to Shelly and Rosenblatt (2012), the SDLC comprises several models that can be adopted based on the requirements of the area or field seeking a solution. In developing software for any healthcare facility, there are basic requirements developers must consider irrespective of the kind of facility, be it a primary, secondary or tertiary institution. The medical software should be such that employees can cope with the management system of the hospital. Secondly, as part of systems security, access should be role-based for every employee according to their responsibilities, and the hospital authority must have control over the access roles, with maximum security for patients' personal data. Thirdly, the user interface must be not just flexible and smart, but also informative, appropriate and easy to use. It is equally important that the selected SDLC model supports the organization, management and control of the development activities in relation to health care. Prabu et al. (2015) stated that software developers should adhere strictly to the guidelines provided by regulatory bodies in the development of software for medical facilities. Another very important factor is risk assessment, which is a fundamental activity in the development of medical devices. The chosen process model provides the design for the whole software development effort. Two relevant models are the agile development model, an approach to software development in which requirements and solutions evolve through the collaborative effort of cross-functional teams, and the waterfall model, the oldest model in the software development life cycle. i) The Waterfall Model: This is the earliest approach used for software development. Shelly et al. (2012) describe it as the traditional model consisting of five phases, while Powell (2016) illustrates it as a six-phase model. Shelly et al. (2012) described each phase as producing a 'deliverable', or output, that flows into the succeeding phase. This model describes the software development process as a 'linear sequential flow', meaning that the next phase begins only after the completion of the previous phase, the output of the previous phase becomes the input of the next, and the phases do not overlap. Figure 3 shows the different stages of software development using the Waterfall model.
**Advantages:** It is simple, easy to comprehend and easy to apply, and it is simple to manage because of its rigidity, as each phase has a specific deliverable. **Disadvantages:** Working software is not produced until late in the life cycle, it is very difficult to go back and alter anything once the application is in the testing phase, and risk and uncertainty are very high (TRY QA, 2018). There is no customer involvement during the development process, and in the event of any failure one must start again from documentation through to coding, which can be very costly. **Applicability:** The waterfall model is best used where the user requirements are well documented, clean and unambiguous, where sufficient resources with the necessary skills are readily available, and where the project is short. This approach does not fit a healthcare facility well, owing to the moderate-to-high risk of changing requirements; moreover, it is a poor model for complex, object-oriented and ongoing projects. ii) The Agile Model: According to Brian et al. (2013), Sutherland and Schwaber first developed this method, and it advanced into a more sophisticated one over time. Prabu et al. (2015) described this approach as comprising iterations of development known as 'sprints', with an initial planning step and a final closing phase of sprint review and retrospective, as shown in Figure 4. Agile methods, according to Rosenblatt (2012), are the most recent techniques; they combine iterative and incremental processes, resulting in small releases, with each release building on the previous one. In this approach, tasks are divided into small time boxes to deliver specific features per release. The features of each build are cumulative, and the final build contains all the features required by the client. Figure 4 illustrates the Agile method with three incremental stages, starting from kick-off and continuing for any required number of increments. The idea behind this model arose early in the history of software development and became widespread because of its flexibility. Scrum, the Rational Unified Process, Extreme Programming and the Dynamic Systems Development Method (DSDM) are some of the most widely used Agile methods. The Agile methodology is most often compared to the waterfall model, and it is seen as better in that it uses an incremental style in which a working demo is reviewed with the client at each increment in order to maintain product quality throughout the development.

III. SYSTEM ANALYSIS, METHODOLOGY AND DESIGN

A. Introduction

This chapter is centred on the research strategy, the methods of data collection, the sampling technique, ethical considerations and limitations of the study, as well as the type of data analysis.

B. Research Strategy

This research is a survey-type study concerned with describing the existing system in the hospital and its effects: that is, whether the existing software conforms to recognized development processes as well as to healthcare and risk management standards, and whether it serves the purpose for which it is meant.

C. Tools and Methods of Data Collection

Primary data were collected using questionnaires and interviews. A personal interview was conducted with the hospital's information technologist, while a telephone interview was used to collect data from the software developers.
D. Sampling Selection

A purposive sampling technique was used to select the hospital for data collection, based on convenience, from among the four hospitals in Kaduna that use such software. A stratified sampling technique was used in administering the questionnaires, owing to the non-homogeneity of the respondents, in order to obtain representative data. The respondents were therefore grouped into four strata: the information technologist, the software developer, staff and patients.

E. Ethical Considerations and Limitations of the Study

The hospital administration and the respondents were assured that this exercise was purely an academic study and would not in any way be used for purposes other than those for which it is intended. Participants remain anonymous, as names and identification marks are not required on the questionnaire. This research study was limited to 152 participants: 1 member of the system development team, 1 information technologist, 50 hospital staff and 100 patients.

F. Data Analysis

Questionnaires were administered to 150 patients, of which 129 were answered and returned, and 45 of the 50 questionnaires issued to the staff were answered and returned. The data collected were analysed using the Statistical Package for the Social Sciences. The following personal interview questions were asked: i. What type of software are you using, customised or off-the-shelf? ii. Is the system audited regularly? iii. Does the system have a mobile application? iv. Can management find out which medical services generate revenue and which should be closely monitored? The following telephone interview questions were asked: i. Does the hospital software comply with quality and risk management standards? ii. Who has access to patients' data? iii. How conversant are the medical and other staff with the software? iv. How are updates made: on request or on a schedule? v. Which of the software models did you adopt in the development of the software?

G. Results of Analysis

It was gathered that the software currently in use is a customised solution that was implemented four months ago, although the hospital started with an off-the-shelf product at the inception of the business five years ago before migrating to the present software. The present software is audited every three weeks under a retainership contract with the consultants. Staff can access the software and run reports on the medical services rendered to patients, and management can evaluate doctors' work as well as see which of the medical services generates more revenue. On the other hand, the software does not comply with any risk management standard and there is no external backup of data. The waterfall approach was adopted in developing the software. The software is updated when necessary and the staff are trained on a regular schedule. Table I shows the data view of the data collated from the questionnaire administered to the patients.

Table I: Data View of Questionnaire Administered to Patients.

Figure 6 is a pie chart showing how the patients came to know about the existence of the hospital. Most respondents learned of the hospital through word of mouth (43.4%) or signposts (48.1%), while only 4% and 7% learned of it through the website and social media respectively.
This suggests either that patients are unaware of the hospital's online presence or that the hospital website is not active due to technological constraints. Table II answers the question of how the patients were registered, whether manually or digitally using a computer system; it shows that they were registered using a digital system, within a record time of about 10 minutes. Table III reveals that 33.3% of the patients carry their case notes from one point to another, while a majority of 65.9% do not have to transport their case notes themselves. Table IV shows that the hospital has a website that is averagely informative: 57.8% of the patients attest to that, while 42.2% are of the opinion that the website does not provide adequate information. Figure 7 is a pie chart depicting the length of time it takes for a patient to be registered. It shows that 73.3% of the staff agree that patients are registered in less than 10 minutes, indicating that the software can easily be logged into and that the hospital staff have been trained to use it. Table V reveals that 84.4% of the staff can easily access patients' records for retrieval, especially when patients come for follow-up; ready retrieval is one of the greatest challenges for hospitals that are not information-technology compliant.

Table IV: Is the hospital website informative? (Yes: 57.8%; No: 42.2% of valid responses.)

Table V: Can you easily access patients' records for retrieval? (Yes: 84.4%; No: 15.6% of valid responses.)

Out of the 50 questionnaires administered to the hospital staff, 38 respondents agree that they can easily access patients' records for retrieval, while fewer than 10 said that they cannot. The staff who cannot access these records may be those who have no business with patients' records. In every hospital setting, the first point of contact for an intending patient is the reception/registration unit; after registration, the patient goes to the doctor's office for consultation and is then referred to the nurses for the administration of treatment and the subsequent documentation of the services rendered. If these groups of staff do not have access to the database, they will not be able to apply the medical protocol or add to the information about the patient that already exists. Table VI shows that the proportion of staff who can log onto the database, make modifications and update patients' records (91.1%) is greater than the proportion who cannot, which confirms the confidentiality and security of patients' data to a certain extent.

Table VI: Can the doctor/nurse access the database template to give a medical protocol to the patient and at the same time add information to the patient card? (Yes: 91.1% of valid responses.)

Figure 8 is a bar chart representing the number of staff who agree that reports can be generated from the system upon request by the patient. Table VII reveals that the clinic has a mobile application that is restricted to just a few of the staff.
Table VII: Does the clinic have a mobile management application?

<table>
<thead>
<tr><th></th><th></th><th>Frequency</th><th>Percent</th><th>Valid Percent</th><th>Cumulative Percent</th></tr>
</thead>
<tbody>
<tr><td rowspan="3">Valid</td><td>Yes</td><td>31</td><td>68.9</td><td>70.5</td><td>70.5</td></tr>
<tr><td>No</td><td>13</td><td>28.9</td><td>29.5</td><td>100.0</td></tr>
<tr><td>Total</td><td>44</td><td>97.8</td><td>100.0</td><td></td></tr>
<tr><td>Missing</td><td>System</td><td>1</td><td>2.2</td><td></td><td></td></tr>
<tr><td>Total</td><td></td><td>45</td><td>100.0</td><td></td><td></td></tr>
</tbody>
</table>

V. CONCLUSION AND RECOMMENDATIONS

A. Introduction

This chapter summarizes the research and gives the conclusions of the overall study. Recommendations and suggestions for further studies are also presented.

B. Summary

This research has dwelt extensively on the healthcare facility and what it takes to make a healthcare information system workable and efficient. It covered the different types of healthcare facility, comprising primary, secondary and tertiary institutions, and the process of developing software for a healthcare facility given its sensitive nature, with emphasis on quality, risk management standards and the SDLC. A secondary healthcare facility was used as a case study, with convenience and stratified sampling techniques applied in selecting the hospital and the sample population respectively. The findings reveal that the software has gone a long way in easing the patient registration process, but much is still left to be desired.

C. Conclusion

In developing software for a healthcare facility, several considerations are paramount. Firstly, the type of healthcare facility involved, whether a primary, secondary or tertiary institution, must be taken into account. Secondly, the approach must be one that serves the purpose of the institution. Thirdly, the risk management plan, based on the services the hospital provides and the standards approved by regulatory bodies, should be considered. Finally, the privacy and security of electronic patient records are of paramount importance and should be safeguarded; hence, the protection of the confidentiality and integrity of patient information must be guaranteed by the software systems. The study analyzed the processes used in the development of software for a secondary healthcare facility in Kaduna State, Nigeria, based on the results of the questionnaires administered and the interviews carried out.

D. Recommendations

This study proposes that, in order to develop software for any healthcare facility, an in-depth analysis of the facility concerned should be carried out to elicit the requirements and produce a design suited to the purpose of the healthcare institution. Secondly, while designing the software, risk management standards based on those of the International Organization for Standardization should be taken into consideration.
{"Source-Url": "https://ijetae.com/files/Volume9Issue1/IJETAE_0119_08.pdf", "len_cl100k_base": 7325, "olmocr-version": "0.1.50", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 40441, "total-output-tokens": 8703, "length": "2e12", "weborganizer": {"__label__adult": 0.0015544891357421875, "__label__art_design": 0.0009398460388183594, "__label__crime_law": 0.0018205642700195312, "__label__education_jobs": 0.018585205078125, "__label__entertainment": 0.00010120868682861328, "__label__fashion_beauty": 0.0007853507995605469, "__label__finance_business": 0.001883506774902344, "__label__food_dining": 0.001979827880859375, "__label__games": 0.00165557861328125, "__label__hardware": 0.0027408599853515625, "__label__health": 0.0692138671875, "__label__history": 0.0007033348083496094, "__label__home_hobbies": 0.0004014968872070313, "__label__industrial": 0.0013933181762695312, "__label__literature": 0.0007762908935546875, "__label__politics": 0.0006575584411621094, "__label__religion": 0.0010786056518554688, "__label__science_tech": 0.040130615234375, "__label__social_life": 0.0003170967102050781, "__label__software": 0.01027679443359375, "__label__software_dev": 0.83984375, "__label__sports_fitness": 0.0016145706176757812, "__label__transportation": 0.001186370849609375, "__label__travel": 0.0006289482116699219}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 41386, 0.0141]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 41386, 0.33377]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 41386, 0.95064]], "google_gemma-3-12b-it_contains_pii": [[0, 4942, false], [4942, 9656, null], [9656, 14267, null], [14267, 19267, null], [19267, 22389, null], [22389, 26424, null], [26424, 29111, null], [29111, 31184, null], [31184, 32348, null], [32348, 35653, null], [35653, 39347, null], [39347, 41386, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4942, true], [4942, 9656, null], [9656, 14267, null], [14267, 19267, null], [19267, 22389, null], [22389, 26424, null], [26424, 29111, null], [29111, 31184, null], [31184, 32348, null], [32348, 35653, null], [35653, 39347, null], [39347, 41386, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 41386, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 41386, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 41386, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 41386, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 41386, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 41386, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 41386, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 41386, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 41386, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 41386, null]], "pdf_page_numbers": [[0, 4942, 1], [4942, 9656, 2], [9656, 14267, 3], [14267, 19267, 4], [19267, 22389, 5], [22389, 26424, 6], [26424, 29111, 7], [29111, 31184, 8], [31184, 32348, 9], [32348, 35653, 10], [35653, 39347, 11], [39347, 41386, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 41386, 0.13559]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
a626ae4135ca85eb6c4866eb4285eb64f5a573a1
**Abstract** Estimation of distribution algorithms (EDAs) are a type of evolutionary algorithm in which a probabilistic model is learned and sampled in each iteration. EDAspy provides different state-of-the-art implementations of EDAs, including the recent semiparametric EDA. The implementations are modularly built, allowing for easy extension and the selection of different alternatives, as well as interoperability with new components. EDAspy is totally free and open-source under the MIT license.

**Algorithm 1** Estimation of distribution algorithm

Input: Population size \( N \), selection ratio \( \alpha \), cost function \( g \)
Output: Best individual \( x' \) and cost found \( g(x') \)
1: \( G_0 \leftarrow N \) individuals randomly sampled
2: for \( t = 1, 2, \ldots \) until stopping criterion is met do
3: Evaluate \( G_{t-1} \) according to \( g(\cdot) \)
4: \( G^S_{t-1} \leftarrow \) Select top \( \alpha N \) individuals from \( G_{t-1} \)
5: \( f_{t-1}(\cdot) \leftarrow \) Learn a probabilistic model from \( G^S_{t-1} \)
6: \( G_t \leftarrow N \) individuals sampled from \( f_{t-1}(\cdot) \)
7: end for

**1. Introduction**

Estimation of distribution algorithms (EDAs) \([1]\) are a type of evolutionary algorithm \([2]\) in which the traditional mutation and crossover operators are replaced by a probabilistic model that is iteratively learned and sampled during the optimization process. EDAs have been successfully applied to a wide range of tasks \([3\text{–}6]\); see \([7]\) for a review of EDAs applied to machine learning tasks. In recent meetings within the field of EDAs \([8]\), a need to establish a reference EDA library has been identified, and EDAspy is proposed to satisfy this need for the scientific community working on this topic. In this paper we present a Python package in which several EDA implementations are efficiently designed. The different optimizers can easily be called and tuned in a user-friendly way. Each EDA variant is built from the available modules, which can be selected as needed to build a new implementation; these variants can easily be extended and can interoperate with new components.

**2. Background**

Algorithm 1 shows the pseudocode of the EDA baseline. Firstly, a random population \( G_0 \) of size \( N \) is sampled (line 1). Secondly, population \( G_{t-1} \) is evaluated (line 3) and ranked (line 4) according to a given cost function \( g(\cdot) \). Thirdly, a probabilistic model is learned from a fraction \( \alpha \) of the best individuals, i.e., the top \( \alpha N \) solutions (line 5). Finally, a new population is sampled from this model (line 6). These four steps are repeated iteratively until the stopping criterion is met.
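To make the loop concrete, the following is a minimal, self-contained sketch of Algorithm 1 for continuous minimization using independent Gaussian marginals, in the spirit of a UMDA\(_C\)-style variant. It re-implements the pseudocode with numpy rather than calling EDAspy itself, so the function and parameter names below are illustrative assumptions, not the library's API.

```python
# Minimal sketch of Algorithm 1 (not EDAspy's actual API): a univariate
# Gaussian EDA for continuous minimization. All names here are assumptions
# made for this example.
import numpy as np


def univariate_gaussian_eda(cost, n_vars, pop_size=100, alpha=0.5,
                            max_iters=200, seed=0):
    rng = np.random.default_rng(seed)

    # Line 1: sample the initial population G_0 uniformly in [-10, 10]^n_vars.
    population = rng.uniform(-10.0, 10.0, size=(pop_size, n_vars))
    best_x, best_cost = None, np.inf

    for _ in range(max_iters):                            # line 2: main loop
        costs = np.apply_along_axis(cost, 1, population)  # line 3: evaluate

        # Keep track of the best individual seen so far.
        i_best = int(np.argmin(costs))
        if costs[i_best] < best_cost:
            best_x, best_cost = population[i_best].copy(), float(costs[i_best])

        # Line 4: truncation selection of the top alpha * N individuals.
        n_sel = max(2, int(alpha * pop_size))
        selected = population[np.argsort(costs)[:n_sel]]

        # Line 5: learn a univariate Gaussian model (independent marginals).
        mu = selected.mean(axis=0)
        sigma = selected.std(axis=0) + 1e-8   # avoid a degenerate variance

        # Line 6: sample the next population from the learned model.
        population = rng.normal(mu, sigma, size=(pop_size, n_vars))

    return best_x, best_cost


if __name__ == "__main__":
    def sphere(x):
        return float(np.sum(x ** 2))          # simple stand-in cost function

    x_best, c_best = univariate_gaussian_eda(sphere, n_vars=10)
    print(c_best)
```

Each iteration maps directly onto lines 3 to 6 of Algorithm 1: evaluation, truncation selection of the top \( \alpha N \) individuals, fitting the probabilistic model, and resampling.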
Depending on the complexity of the probabilistic model and the nature of the optimization problem, different EDA variants are identified in the literature. Univariate approaches assume independence among the variables, and a probability distribution is fitted independently to each of them. EDAspy uses independent Gaussian distributions, kernel density estimation (KDE) or categorical probability distributions, depending on the EDA variant and the nature of the data. Multivariate approaches contemplate dependencies between the variables using different probabilistic models. EDAspy uses multivariate Gaussians or different types of Bayesian networks (BNs) \([9]\), corresponding to different EDA versions.

**3. Software framework**

Fig. 1 gives a high-level representation of the modules in EDAspy. In general, an EDA implementation is applied to a cost function to be minimized, and results are returned. Several EDA implementations are available in the library, organized into univariate and multivariate modules, but it is also possible to build a customized implementation by integrating the available components, optionally together with other modules, in the EDA object. Regarding the cost function, several benchmarks are implemented, and a custom cost function can also be used. Once the optimizer has converged, various information and plots can be extracted from the execution. Moreover, although the library has been built modularly in order to allow integration with new custom implementations, an EDA optimizer can also be extended and built from scratch by the user without using the Custom EDA module facilities. EDAspy is organized into the following modules:

- **Benchmarks.** Different test functions for benchmarking and comparing the different optimizers are included, such as toy discrete functions like OneMax [10] and benchmark suites such as IEEE CEC 2014 [11].
- **Univariate.** The following univariate approaches, in which no dependencies between variables are considered: the univariate marginal distribution algorithm (UMDA) for (i) binary [12] (UMDA_B), (ii) categorical (UMDA_D), and (iii) continuous optimization [13] (UMDA_C); (iv) kernel EDA [14] (u_KEDA); and (v) the population-based incremental learning algorithm [15] (PBIL).
- **Multivariate.** The following multivariate approaches, in which dependencies between variables are considered: (i) estimation of Bayesian network algorithm [1] (EBNA), (ii) estimation of multivariate normal algorithm [1] (EMNA), (iii) estimation of Gaussian network algorithm [16] (EGNA), (iv) semiparametric EDA [17] (SPEDA), (v) multivariate kernel density EDA [17] (m_KEDA), and (vi) Bayesian optimization algorithm (BOA) [18], in which a discrete BN, a multivariate Gaussian distribution, a Gaussian BN, a semiparametric BN, a kernel density estimated BN, and a discrete BN are iteratively learned, respectively.
- **Custom.** This module includes the different components to build a custom EDA variant and is divided into probabilistic and initialization models.
- **Probabilistic model.** The following components are implemented for learning and sampling. Regarding univariate probabilistic models, (i) binary, (ii) discrete, (iii) Gaussian, and (iv) KDE models are considered. Regarding Bayesian networks, (v) Gaussian, (vi) semiparametric, (vii) KDE, and (viii) discrete models are available. Other models include (ix) the multivariate Gaussian.
- **Initialization model.** Uniform sampling within user-defined landscape bounds, Latin hypercube sampling [19] and initialization from a given dataset are available for building the first population of the EDA.
- **Self-implemented modules.** Modules implemented by users that can be integrated into the library.
- **Plotting tools.** The tools for graphically representing the probabilistic model embedded in the EDA are included in this module. Fig. 2 shows an example of two different probabilistic models: panel (a) represents a Gaussian BN, in which dependencies between variables are considered, while panel (b) represents a univariate model, in which no dependencies are considered.

Regarding the multivariate EDA implementations, some of the probabilistic models are interfaced to the PyBNesian library [20], which uses C++ to speed up the back-end computations. All the algebraic computations in EDAspy are performed with the numpy library [21], which employs C to speed up the back-end computations. Moreover, parallelization of the optimizer is available through the multiprocessing library [22,23] and can optionally be activated in all the EDA implementations.

**4. Related work**

Although there are several libraries in which different evolutionary algorithms are available, to the best of our knowledge there are no comparable published libraries with different EDA implementations in Python. However, we list here some libraries in which some EDA implementations are available.

- **mateda** [24] is a matlab library which allows building multivariate EDAs based on undirected probabilistic models and Bayesian networks. The purpose of the library is different from that of EDAspy: it offers a framework to build a multivariate EDA algorithm from modules into which different components can be integrated. mateda implements categorical and Gaussian Bayesian networks, multivariate Gaussian distributions, Markov networks and mixtures of Gaussian distributions as probabilistic models. However, semiparametric and KDE Bayesian networks are missing, and implementations of univariate approaches are omitted. Moreover, the last released version of mateda was in 2020.

Table 1 summarizes the main differences between the listed libraries. Regarding univariate approaches, inspyred implements UMDA\(_C\) and LEAP plans to integrate PBIL approaches in the near future, compared to the five variants implemented in EDAspy. Regarding multivariate approaches, LEAP will incorporate the BOA approach, which is also implemented in EDAspy. The most competitive library is mateda, which overlaps with some of the implemented multivariate approaches and also allows building a custom EDA version with some additional probabilistic models. However, mateda is implemented in matlab and seems to be no longer updated.

**5. Performance analysis**

In this section we compare the performance of the different continuous-domain optimizers implemented in EDAspy. For the evaluation, three cost functions (to be minimized) have been selected from the benchmark suite in EDAspy: CEC14\(_3\), CEC14\(_4\) and CEC14\(_8\), where the first is unimodal and the rest are multimodal functions. Section 4 reviewed some existing software for EDAs in different programming languages; in this section we also compare the result found by the UMDA\(_C\) approach implemented in inspyred.
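As a rough illustration of this kind of comparison protocol (independent repetitions, mean best cost, mean CPU time), the following self-contained sketch times a stand-in optimizer on a simple sphere function. It is not the paper's actual experimental code: the random-search optimizer and the sphere benchmark are placeholders for the EDAspy variants and the CEC14 functions, and all names are assumptions made for this example.

```python
# Sketch of the runtime / solution-quality protocol: run each optimizer
# several times on each benchmark and report mean best cost and mean CPU time.
# The optimizer and cost function below are stand-ins, not EDAspy code.
import time
import numpy as np


def sphere(x):
    """Placeholder benchmark; unimodal, minimum 0 at the origin."""
    return float(np.sum(np.asarray(x) ** 2))


def random_search(cost, n_vars, evaluations=20_000, seed=0):
    """Stand-in optimizer with the call convention an EDA variant might use."""
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(-100.0, 100.0, size=(evaluations, n_vars))
    costs = np.apply_along_axis(cost, 1, candidates)
    i = int(np.argmin(costs))
    return candidates[i], float(costs[i])


def compare(optimizers, benchmarks, n_vars=10, repetitions=5):
    """Mean best cost and mean CPU time over independent repetitions."""
    results = {}
    for opt_name, optimizer in optimizers.items():
        for bench_name, cost in benchmarks.items():
            best_costs, runtimes = [], []
            for run in range(repetitions):
                start = time.perf_counter()
                _, best = optimizer(cost, n_vars, seed=run)
                runtimes.append(time.perf_counter() - start)
                best_costs.append(best)
            results[(opt_name, bench_name)] = (np.mean(best_costs),
                                               np.mean(runtimes))
    return results


if __name__ == "__main__":
    table = compare({"random_search": random_search}, {"sphere": sphere})
    for key, (mean_cost, mean_time) in table.items():
        print(key, f"mean best cost = {mean_cost:.3g}",
              f"mean CPU time = {mean_time:.3f}s")
```

In an actual study, each EDAspy variant would be passed in through the `optimizers` dictionary so that all of them share the same repetitions, benchmarks and timing harness.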
Although mateda and LEAP were also reviewed, the former is implemented in a different programming language, so a comparison in terms of CPU time would not be fair, and the latter does not currently include any of the implemented approaches. All the optimizers have been configured identically in order to perform a fair comparison; the hyper-parameters and a more extended tutorial can be found in the original documentation. Since a statistical study is out of the scope of this paper (see [17] for a more complete analysis), we show a runtime and final-solution analysis of the different variants for continuous optimization in EDAspy. Fig. 3 shows the mean best cost found after 5 independent executions. It is generally observed that, for the three functions, the best approaches are SPEDA, m\_KEDA and EGNA, which find the minimal costs in the benchmarks. Previous analyses have shown that the m\_KEDA, SPEDA and EGNA approaches achieve statistically significant improvements in terms of the quality of solutions [17]. In the case of the UMDA\(_C\) implementation from the inspyred library, a slightly worse result is found in all three benchmarks compared to the implementation provided in EDAspy.

**6. Illustrative examples**

The following examples are available in the original documentation\(^1\), where different EDAs are applied to different tasks:

- Using UMDA\(_C\) for continuous optimization. UMDA\(_C\) is tested on an IEEE CEC 2014 benchmark.
- Using SPEDA for continuous optimization. SPEDA is tested on a provided benchmark and several convergence plots are shown.
- Using EGNA for continuous optimization. EGNA is tested on a provided benchmark and the plotting tools module is used to graphically show the probabilistic model embedded in the EDA approach.
- Using EMNA for continuous optimization. EMNA is tested on an IEEE CEC 2014 benchmark.
- Using UMDA\(_D\) for feature selection in a toy example. Given a dataset and a forecasting model, UMDA\(_D\) is used to select the best subset of variables that optimizes the accuracy of the prediction.
- Categorical optimization using EBNA and UMDA\(_D\). A categorical cost function is designed and optimized by the EBNA and UMDA\(_D\) approaches.
- Building my own EDA implementation. A tutorial on how to customize an EDA implementation is provided.
- CPU time analysis. All the continuous-domain EDA variants are tested against the same IEEE CEC 2014 benchmark.

\(^1\) https://github.com/VicentePerezSoloviev/EDAspy/blob/master/notebooks/CPU\%20time\%20analysis.ipynb.

**7. Conclusions**

In this paper we present the first Python library entirely dedicated to EDA implementations. EDAspy has been shown to be easy to use and to integrate with custom implementations. We therefore hope that EDAspy can speed up the development of research on EDAs and their applications. In addition to maintaining the code and fixing bugs found by EDAspy users, future work will include adding more visualization tools for the optimization process and implementing other EDA variants.

CRediT authorship contribution statement

**Vicente P. Soloviev:** Writing – review & editing, Writing – original draft, Visualization, Validation, Software, Project administration, Methodology, Investigation, Formal analysis, Conceptualization. **Pedro Larrañaga:** Writing – review & editing. **Concha Bielza:** Writing – review & editing.
Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

No data was used for the research described in the article.

Acknowledgments

This work has been partially supported by the Spanish Ministry of Science and Innovation through the PID2022-139977NB-I00 and TED2021-131310B-I00 projects, and by the Autonomous Community of Madrid within the ELLIS Unit Madrid framework. Vicente P. Soloviev has been supported by the predoctoral grant FPI PRE2020-094828 from the Spanish Ministry of Science and Innovation.

Appendix. Required metadata

A.1. Current executable software version

See Table A.2.

A.2. Current code version

See Table A.3.

Table A.3

<table>
  <thead>
    <tr><th>C</th><th>Code metadata description</th><th>Software metadata information</th></tr>
  </thead>
  <tbody>
    <tr><td>C1</td><td>Current code version</td><td>1.1.4</td></tr>
    <tr><td>C2</td><td>Permanent link to code/repository used of this code version</td><td><a href="https://github.com/VicentePerezSoloviev/EDAspy">https://github.com/VicentePerezSoloviev/EDAspy</a></td></tr>
    <tr><td>C3</td><td>Legal software license</td><td></td></tr>
    <tr><td>C4</td><td>Code versioning system used</td><td>git</td></tr>
    <tr><td>C5</td><td>Software code languages, tools, and services used</td><td>python</td></tr>
    <tr><td>C6</td><td>Compilation requirements, operating environments &amp; dependencies</td><td>Compatible python 3.8-3.11; pybnesian, numpy, pandas, scikit-learn, scipy, pgmpy, pyarrow, multiprocessing</td></tr>
    <tr><td>C7</td><td>Link to developer documentation/manual</td><td><a href="https://edaspy.readthedocs.io/en/latest/">https://edaspy.readthedocs.io/en/latest/</a></td></tr>
    <tr><td>C8</td><td>Support email for questions</td><td><a href="mailto:vicente.perez.soloviev@gmail.com">vicente.perez.soloviev@gmail.com</a></td></tr>
  </tbody>
</table>

### References

Vicente P. Soloviev received the M.Sc. degree in Artificial Intelligence from Universidad Politécnica de Madrid, in 2020, and he is currently a Ph.D. student at Universidad Politécnica de Madrid in the Computational Intelligence Group. He teaches some subjects related to Artificial Intelligence for the B.Sc. in Computer Science at Universidad Politécnica de Madrid. His research interests include the areas of probabilistic graphical models, metaheuristics for optimization, quantum machine learning, quantum heuristics, and real applications such as industry 4.0. Vicente holds a pre-doctoral FPI contract awarded by the Spanish Ministry of Science and Innovation since 2021.

Pedro Larrañaga received the M.Sc. degree in Mathematics (Statistics) from the University of Valladolid and the Ph.D. degree in Computer Science from the University of the Basque Country (excellence award). He has been a Full Professor in Computer Science and Artificial Intelligence with the Universidad Politécnica de Madrid (UPM), since 2007. Before moving to UPM, his academic career developed at the University of the Basque Country (UPV-EHU) through several faculty ranks: Assistant Professor, from 1985 to 1998, Associate Professor, from 1998 to 2004, and Full Professor, from 2004 to 2007. He has published over 200 papers in high-impact factor journals. He has supervised over 35 Ph.D. theses.
His research interests include the areas of probabilistic graphical models, metaheuristics for optimization, data mining, classification models, and real applications, such as biomedicine, bioinformatics, neuroscience, industry 4.0, and sports. He has been a Fellow of the European Association for Artificial Intelligence since 2012 and a Fellow of the Academia Europaea since 2018. He received the 2013 Spanish National Prize in computer science, the 2018 prize of the Spanish Association for Artificial Intelligence, and the 2020 Machine Learning Award from Amity University (India).

Concha Bielza received her M.S. degree in Mathematics from Universidad Complutense de Madrid, in 1989, and her Ph.D. degree in Computer Science from Universidad Politécnica de Madrid (UPM) in 1996 (extraordinary doctorate award). She is currently (since 2010) a Full Professor of Statistics and Operations Research with the Departamento de Inteligencia Artificial, UPM. Her research interests are primarily in the areas of probabilistic graphical models, decision analysis, metaheuristics for optimization, classification models, and real applications, such as biomedicine, bioinformatics, neuroscience and industry 4.0. She has published more than 150 papers in high impact factor journals and has supervised 22 Ph.D. theses. She was awarded the 2014 UPM Research Prize and the 2020 Machine Learning Award from Amity University (India).
{"Source-Url": "https://cig.fi.upm.es/wp-content/uploads/EDAspy-An-extensible-python-package-for-estimation-of-distribution-algorithms-final.pdf", "len_cl100k_base": 4260, "olmocr-version": "0.1.48", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 18881, "total-output-tokens": 5835, "length": "2e12", "weborganizer": {"__label__adult": 0.0003736019134521485, "__label__art_design": 0.00038504600524902344, "__label__crime_law": 0.0004792213439941406, "__label__education_jobs": 0.002040863037109375, "__label__entertainment": 0.00011670589447021484, "__label__fashion_beauty": 0.00020742416381835935, "__label__finance_business": 0.0003879070281982422, "__label__food_dining": 0.0004935264587402344, "__label__games": 0.0006680488586425781, "__label__hardware": 0.0009245872497558594, "__label__health": 0.00098419189453125, "__label__history": 0.00036716461181640625, "__label__home_hobbies": 0.0001691579818725586, "__label__industrial": 0.0007557868957519531, "__label__literature": 0.00030803680419921875, "__label__politics": 0.0004563331604003906, "__label__religion": 0.0005664825439453125, "__label__science_tech": 0.1534423828125, "__label__social_life": 0.00021779537200927737, "__label__software": 0.01202392578125, "__label__software_dev": 0.8232421875, "__label__sports_fitness": 0.0004596710205078125, "__label__transportation": 0.0006508827209472656, "__label__travel": 0.0002772808074951172}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 22045, 0.02924]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 22045, 0.21927]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 22045, 0.84868]], "google_gemma-3-12b-it_contains_pii": [[0, 4297, false], [4297, 9016, null], [9016, 12271, null], [12271, 14522, null], [14522, 20538, null], [20538, 22045, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4297, true], [4297, 9016, null], [9016, 12271, null], [12271, 14522, null], [14522, 20538, null], [20538, 22045, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 22045, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 22045, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 22045, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 22045, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 22045, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 22045, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 22045, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 22045, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 22045, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 22045, null]], "pdf_page_numbers": [[0, 4297, 1], [4297, 9016, 2], [9016, 12271, 3], [12271, 14522, 4], [14522, 20538, 5], [20538, 22045, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 22045, 0.18182]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
3d9774fa57ddea0c1d2068a1afac0cfefa7df598
SOFTWARE REENGINEERING Ernest M. Fridge III Deputy Chief, Software Technology Branch/PT4 NASA/Johnson Space Center Houston, Texas 77058 (713) 483-8109 Nasamail: EFRIDGE ABSTRACT Today's software systems generally use obsolete technology, are not integrated properly with other software systems, and are difficult and costly to maintain. The discipline of reverse engineering is becoming prominent as organizations try to move their systems up to more modern and maintainable technology in a cost effective manner. The Johnson Space Center created a significant set of tools to develop and maintain FORTRAN and C code during development of the space shuttle. This tool set forms the basis for an integrated environment to reengineer existing code into modern software engineering structures which are then easier and less costly to maintain and which allow a fairly straightforward translation into other target languages. The environment will support these structures and practices even in areas where the language definition and compilers do not enforce good software engineering. The knowledge and data captured using the reverse engineering tools is passed to standard forward engineering tools to redesign or perform major upgrades to software systems in a much more cost effective manner than using older technologies. A beta version of the environment was released in March, 1991. The commercial potential for such reengineering tools is very great. CASE TRENDS magazine reported it to be the primary concern of over four hundred of the top MIS executives. INTRODUCTION Programs in use today generally have all of the functional and information processing capabilities required to do their specified job. However, older programs usually use obsolete technology, are not integrated properly with other programs, and are difficult to maintain. Reengineering is becoming a prominent discipline as organizations try to move their systems to more modern and maintainable technologies. Johnson Space Center's (JSC) Software Technology Branch (STB) is researching and developing a system to support reengineering older FORTRAN programs into more maintainable forms that can also be more readily translated to a modern language such as FORTRAN 8x, Ada, or C. This activity has led to the development of maintenance strategies for design recovery and reengineering. These strategies include a set of standards, methodologies, and the concepts for a software environment to support design recovery and reengineering. This document provides a brief description of the problem being addressed and the approach that is being taken by the STB toward providing an economic solution to the problem. A statement of the maintenance problems, the benefits and drawbacks of three alternative solutions, and a brief history of the STB's experience in software reengineering are followed by the STB's new FORTRAN standards, methodology, and the concepts for a software environment. STATEMENT OF THE PROBLEM Based on trends in the computer industry over the last few years, it is clear that computer hardware, languages, and procedures are not static. The software industry recognizes that a large existing software base must be dealt with as new software engineering concepts and software technologies emerge. The old systems use outdated technology and are costly to maintain. At JSC, as in industry at large, there is a large investment in existing FORTRAN software. 
These FORTRAN systems do not consistently use modern software practices that can increase maintainability. Yet these systems must be maintained for perhaps the next 20 years. Management is seeking ways to reduce maintenance costs.

In the 1960s-70s many FORTRAN programs were developed at JSC, each with its own sizeable software development team and its own input/output format. These programs could not communicate readily and eventually were "wired" together in a very crude semblance of integration. Standards could not be enforced because FORTRAN did not enforce them and some were not visible by just looking at the code. The problem was aggravated by the lack of training of new developers plus a 50 percent turnover in the very large development staff every two years. In addition, the user organizations had more people doing development than the development group, and these other organizations were not always aware of the standards and support tools available. This history has left JSC with the following problems:

- Many programs are large and difficult to understand, resulting in maintenance problems.
- The problems in maintenance led to users keeping their own versions of programs, resulting in tremendous duplication.

Many of the FORTRAN programs have already been converted from their original dialect of FORTRAN to the FORTRAN 77 standard. Additional conversions will periodically be required even if only to new FORTRAN standards. It is necessary to consider the question: where will that code have to be in five or ten years? Three possible answers come to mind:

- FORTRAN 77 is the current standard, but this will be replaced by newer Fortran standards. As vendors stop supporting FORTRAN 77, existing FORTRAN will have to move to the new standard or to another language.
- Much of the code may move to the Ada language. This will be particularly true on Space Station Freedom work.
- With C being the language of choice for Unix, some of the code might move to the C language.

ALTERNATIVE SOLUTIONS

Three alternative solutions to the problems described above have been identified: complete redevelopment of the program, code translation to a more modern language or version of a language, and reengineering. Each of these is illustrated in figure 1 and discussed briefly in the following paragraphs.

Redevelopment of a system from scratch is very expensive. Redevelopment includes all of the same phases of the life cycle as new development, from requirements through integration and testing. Extensive domain analysis is required, and there is a risk of incomplete requirements. All too often it is reported that a large program will be redeveloped from scratch to a more modern style only to find out that the new developers did not understand all of the functions and necessary information requirements of the existing system.

Code translation, especially automatic code translation, costs much less. Some might then ask, why worry about all of this now? We can use a translator when the time comes that we are forced to move the code forward. Although this would be a nice solution, the truth is that code translators have proven unsuccessful for several major reasons:

- Poor existing control flow is translated into poor control flow.
- Poor existing data structures remain poor data structures.
- Input/output translation usually produces hard to read "unnatural" code in the new language.
- Translation does not take advantage of the code and data packaging techniques available in the newer languages.
Attempts to automatically translate some FORTRAN programs to Ada have failed. Reengineering is the combination of "reverse engineering" a working software system and then "forward engineering" a new system based on the results of the reverse engineering. Forward engineering is the standard process of generating software from "scratch." It is composed of the life cycle phases such as requirements, architectural design, detailed design, code development, testing, etc. In each phase, certain products are required and the activities which produce them are defined. Each product is required to be complete and consistent. To progress forward to a new phase normally requires a new representation of the products which involve more detail such as new derived requirements, design decisions, trade off evaluation between alternative approaches, etc. Finally, code is developed which is the most complete, consistent, and detailed representation of the required product. Reverse engineering is the reverse of forward engineering. It is the process of starting with existing code and going backward through the software development life cycle. Life cycle products are, therefore, obtained by abstracting from more detailed representations to more abstract ones. This process should proceed much faster than forward engineering since all of the details required are available. Reverse engineering starts with the most detailed representation, which has also proven to be complete and consistent since it can currently do the job required. Developing products in reverse involves abstracting out only the essential information and hiding the non-essential details at each reverse step. How far to go backward in the reverse engineering process before it is stopped and forward engineering begins is a critical question and involves trade offs. It is important to understand all of what the program does, all of the information it handles, and the control flow since these are probably required to get the job done. This implies taking the reverse process far enough to understand what the "as is" program is. This is usually more significant than how the program does its job since the how is usually the part that will be changed in any following forward engineering process. What a program does is called its requirements. How it meets those requirements is its design. For a reverse engineered program it is the design that will be updated more often than what the program will do. Modern software engineering techniques and technologies such as user interfaces, database management, memory utilization, data structuring, packages, objects, etc. will affect the design, not what the program does. Therefore, once it is understood what the program does and what is obsolete, then the forward engineering process can begin with confidence. Reverse engineering is referred to as "design recovery" when the reverse engineering process stops at the recovery of the design of the implementation, rather than proceeding on to a higher level of abstraction to include the recovery of the requirements. The basic process of this level of design recovery involves recovery of information about the code modules and the data structures in an existing program. This information will support the programmer/analyst who is maintaining an unfamiliar large FORTRAN program, upgrading it for maintainability, or converting it to another target language. 
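As a toy illustration of the kind of module information that design recovery gathers, the sketch below (hypothetical, and not one of the JSC tools described in this paper) scans fixed-form FORTRAN source for SUBROUTINE definitions and CALL statements and lists which routine calls which, a crude first step toward a call graph. A real design-recovery tool would also track COMMON blocks, argument lists and data structures, and would parse the language properly rather than matching keywords.

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Copy the identifier starting at p into name (letters and digits only). */
static void read_name(const char *p, char *name, size_t max) {
    size_t n = 0;
    while (*p && isalnum((unsigned char)*p) && n + 1 < max)
        name[n++] = (char)toupper((unsigned char)*p++);
    name[n] = '\0';
}

int main(int argc, char **argv) {
    if (argc != 2) { fprintf(stderr, "usage: %s file.f\n", argv[0]); return 1; }
    FILE *fp = fopen(argv[1], "r");
    if (!fp) { perror("fopen"); return 1; }

    /* Code before the first SUBROUTINE is attributed to MAIN (a simplification). */
    char line[256], upper[256], current[64] = "MAIN", callee[64];
    while (fgets(line, sizeof line, fp)) {
        if (line[0] == 'C' || line[0] == 'c' || line[0] == '*')
            continue;                      /* skip fixed-form comment lines */
        size_t i;
        for (i = 0; line[i] && i < sizeof upper - 1; i++)
            upper[i] = (char)toupper((unsigned char)line[i]);
        upper[i] = '\0';

        char *p;
        if ((p = strstr(upper, "SUBROUTINE ")) != NULL) {
            read_name(p + strlen("SUBROUTINE "), current, sizeof current);
            printf("module: %s\n", current);
        } else if ((p = strstr(upper, "CALL ")) != NULL) {
            read_name(p + strlen("CALL "), callee, sizeof callee);
            printf("  %s calls %s\n", current, callee);
        }
    }
    fclose(fp);
    return 0;
}
```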
However, a better job of redesigning a program can be accomplished with requirements recovery than with design recovery. To carry the reverse engineering process beyond design recovery to requirements recovery is difficult and requires higher levels of domain knowledge to do the abstractions. The why's of the requirements, design, and implementation can only be provided by someone very familiar with the program and the domain. This level of expertise is often very difficult to find and have dedicated to the reengineering process. For this reason, the methods and tools that the STB has developed initially assume reverse engineering only to the design recovery stage. Future development will be based on feedback from the JSC software engineering community. The current standards, methods, tools, and environment are all designed to be sufficiently flexible and extendible to enable the strategies to be extended to cover the full spectrum of reverse engineering. The overriding philosophy of this planned reverse engineering process is to capture the total software implementation in an electronic form. This includes source code, documentation, databases, etc. Figure 2 illustrates the progression of data structures from COMGEN-compatible code (see section "Software Technology Branch's Reengineering History") to reengineered code. This progression in electronic form ensures that the total consistent and complete requirements representation is available. Software tools are provided to support the generation of the more abstract products required for engineering in reverse as well as capturing rationale and decisions of the engineer. By the continuing process of abstracting the information about the program into the different representations, the engineer can remain more confident that information is not being lost or inadvertently "falling through the cracks." SOFTWARE TECHNOLOGY BRANCH'S REENGINEERING HISTORY In the early 1970's, the Mission Planning and Analysis Division's (MPAD) Software Development Branch and TRW/Houston developed a tool, called COMGEN, that began as a COMMON block specification statement generator. It grew to include many other functions as new techniques were developed. Later COMGEN was broken up into a continually evolving set of tools with common data interface structures. This tool set supports the maintenance of FORTRAN programs today on Unisys and multiple Unix systems. People still refer to this tool set as COMGEN tools, and a program that complies with the MPAD standard COMMON concept as a COMGEN-compatible program. [1,2,3] In the 1970's, MPAD performed a lot of software reengineering to meet the goal of combining many of the independently developed engineering programs, each with its own input/output formats. Many of the modern concepts such as separation of input/output processing from the applications, databases, data structures, packages, generics, objects, etc. were recognized and simulated to some degree. They were not called by the modern names, of course, but the design engineers were trying to do good engineering, modularization, and data handling. Even though these techniques were known in the 1970's, they are just now really becoming popular because of newer technologies such as database management systems, user interface tools sets, and modern languages that actually embed and enforce good software engineering practices. 
In the late 1980's, some of the personnel and the functions of the Software Development Branch were reorganized into the newly created Software Technology Branch (STB). The STB's reengineering history has put JSC in a better position with respect to the maintainability of its older software than many other organizations. The positive results of this experience include the following: - Most of the software is reasonably modular. - The data has some structure. - Most of the software at JSC is reasonably compatible with the STB's tools, including the in-line documentation. - The large complex programs that support many simulations have considerable software reuse and information sharing. MAINTENANCE STRATEGIES The strategies presented in this document are intended to help with design recovery in support of programmer/analysts who are required to maintain large FORTRAN programs that they did not develop. In addition, these strategies are intended to support reengineering of existing FORTRAN code into modern software engineering structures, which are then easier to maintain and which allow a fairly straightforward translation into other target languages. The STB is proposing standards, methods, and an integrated software environment based upon the significant set of tools built to develop and maintain FORTRAN code for the Space Shuttle. [4,5,6,7,8] The environment will support these structures and practices even in areas where the language definition and compilers do not enforce good software engineering practices. New FORTRAN Standards New standards, which allow modern software engineering constructs to be used in FORTRAN 77, have been defined by the STB. [5] These standards are added to existing standards defined by the former MPAD and still in use in the mission planning and analysis domain. The goal of the new standards is to improve maintainability and permit relatively automated translations to newer languages. In table 1, the standards and their benefits are summarized. These standards address documentation, longer variable names, modern control flow structures, grouping subprograms together as virtual packages, data structuring, and input/output encapsulation in separate subprograms. Where FORTRAN 77 does not provide the constructs, virtual constructs are provided along with a tool environment to support their development and maintenance. The existing core of FORTRAN programmers should have little problem with the standards and new FORTRAN code should adhere to them from the start. Table 1. 
Standards Summary <table> <thead> <tr> <th>Standard</th> <th>Benefit</th> </tr> </thead> <tbody> <tr> <td>Documentation</td> <td></td> </tr> <tr> <td>Header statement before code blocks</td> <td>Understandability</td> </tr> <tr> <td>Requirements in CD1 statements</td> <td>Understandability and traceability</td> </tr> <tr> <td>Rationale in CD7 statements</td> <td>Design knowledge capture</td> </tr> <tr> <td>Virtual package identification</td> <td>Maintenance</td> </tr> <tr> <td>Longer, more meaningful variable names</td> <td>Understandability</td> </tr> <tr> <td>Modern control flow structures</td> <td>Maintenance and understandability</td> </tr> <tr> <td>Block DO</td> <td></td> </tr> <tr> <td>DO WHILE</td> <td></td> </tr> <tr> <td>Grouping subprograms into virtual packages</td> <td>Higher level of abstraction, understandability</td> </tr> <tr> <td>Data structuring</td> <td></td> </tr> <tr> <td>Preferred use of calling parameters</td> <td>Maintenance</td> </tr> <tr> <td>Controlled use of COMMON blocks</td> <td>Maintenance</td> </tr> <tr> <td>INCLUDE</td> <td>COMMON database concept</td> </tr> <tr> <td>Preferably encapsulate input/output in separate subprograms</td> <td>Maintenance and support to future conversions</td> </tr> </tbody> </table> Design Recovery and Reengineering Methodology The reengineering methodology defines the steps, the skills required, and guidelines on how far to reverse engineer before deciding to rebuild. The key goal is to update to modern technology and software engineering concepts without losing required functions and data. Methods are provided that have the flexibility to meet multiple levels of conversion, each of which improves maintainability. Figure 3 illustrates five methods. Method 1 converts an arbitrary FORTRAN program to COMGEN-compatible FORTRAN, which provides in-line documentation, data structure, and unique data names within a COMMON structure. Method 2 converts software already in this format to the new "standard" FORTRAN with a more Ada-like structure that is ready for a mostly automated translation by Method 3 to a target language that embeds software engineering principles. Alternatively, COMGEN-compatible programs can be converted directly to a target language like Ada by Method 4. Although it is easier to convert a FORTRAN program when the code already meets the standard COMMON concept, commonly known as COMGEN-compatible, arbitrary FORTRAN can be directly converted to a target language by Method 5. ![Diagram of reengineering methods] Figure 3. Reengineering Methods Environment to Support Design Recovery and Reengineering The STB's reengineering environment [7] is being built around three components: standards, methods, and tools that support the standards and the methods. It contains modified versions of the tools used to support the current JSC FORTRAN programs plus commercial off-the-shelf (COTS) tools and additional custom-built tools. The intent is to get an environment out into use in JSC's maintenance community to provide support for upgrading FORTRAN programs in terms of maintainability in the near-term, then to extend the functionality of the tool set and environment in response to feedback from the programmers/analysts. Currently about eight groups at JSC are using the tools. Some support for the C language exists and a cooperative agreement with the Microelectronic and Computer Technology Corporation (MCC) is evaluating research into design recovery of C programs. 
The environment has been designed with stable interfaces defined to provide for the maximum degree of seamlessness that is desirable. It is doubtful that COTS tools can be integrated seamlessly into the environment as no standard interfaces have yet been established for either user interface or data interface (as opposed to data exchange). The tools are integrated at the front end by a user interface and behind the screen by two logical databases, one containing data passed to and from the tools and the other containing the original and modified source code as shown in figure 4. CASE framework tools are being evaluated as possible integration mechanisms. The environment will not be a completely automated environment since much work will still have to be done by a programmer/analyst. A person must be in the loop to provide the required puzzle-solving skills that are beyond the capabilities of state-of-the-practice tools. However, as an experience base is accrued in design recovery and reengineering, knowledge-based capabilities can be added to the environment. Version 1 of the environment called REengineering APplications (REAP) was delivered in June, 1991. This integrated all existing JSC supported tools listed above, behind a common user interface built on the MOTIF standard. It contains major elements of all subsystems and encapsulates the capabilities that have been developed and used at JSC during the last fifteen years. A version with improved tool integration, user interface enhancements, and the commercial LOGISCOPE tool was delivered in October, 1991. The Fortran design recovery version should be available in February, 1992. MCC should also have delivered an evaluation prototype of a design recovery capability for the C language by that time. In parallel, the study of using CASE framework standards and tools to better integrate and manage this environment should be completed early in 1992 and the version 2 series will be delivered on one of these platforms. The plans and design of REAP are such that all deliveries containing COTS products will be tailorable so that users can delete the COTS tools that they do not want to license. This policy even includes the framework integration tools. In most cases, similar functions might still be available but they would have less capability. CONCLUSIONS JSC has a large amount of existing code in FORTRAN that embodies domain knowledge and required functionality. This code must be maintained and eventually translated to more modern languages. Three primary alternative solutions have been identified to address the maintenance problems of these old FORTRAN programs: complete redevelopment of the programs, code translation to a more modern language or version of a language, and reengineering. Complete redevelopment is effective but very costly. Simple code translation is cheap, but usually ineffective since seldom do the old systems incorporate modern software engineering concepts such as good data structuring, good control structuring, packages, objects, etc., that should be present in the new system. Modern languages such as Ada have constructs for representing these features, but translators cannot determine these features in the original code to map them into the new system. Reengineering is being recognized as a viable option because the old systems, in spite of obsolete technology, do contain all of the required functionality and can get the job done. 
However, at the present time there are only a few expensive Computer Aided Software Engineering (CASE) tools and no total system environment available in the COTS market to support reengineering FORTRAN programs. The STB maintenance strategies provide standards, methods, and a tool environment for upgrading current FORTRAN systems without losing the embedded engineering knowledge and at a lower cost than for complete redevelopment of the program.

A useful environment for reengineering FORTRAN software can be built fairly quickly by building upon the existing FORTRAN development and maintenance tools, COTS products, new software and hardware technologies, plus current research into reuse, design recovery, and reengineering. This environment will support reengineering existing FORTRAN code into more maintainable forms that can also be readily translated into a modern language, including newer versions of FORTRAN. Two versions of the environment were delivered in 1991 which integrate the existing JSC tools plus the commercial LOGISCOPE tool behind a common MOTIF user interface. A Fortran design recovery capability should be available in February, 1992 and the MCC should deliver a design recovery prototype for the evaluation of design recovery in the C language by that time. Plans are to integrate this capability on a CASE framework tool during 1992.

GLOSSARY

**arbitrary FORTRAN** FORTRAN program that is not compatible with the COMGEN standards long in place for JSC's mission planning and analysis domain.

**COMGEN-compatible** FORTRAN program that is compatible with the COMGEN standards long in place for JSC's mission planning and analysis domain. [1]

**COTS** Commercial-Off-The-Shelf.

**design recovery** Reverse engineering, the first step for maintenance or reengineering.

**environment** Instantiation of a framework, i.e., an integrated collection of tools. It may support one or more methodologies and may also provide a framework for third party tools.

**framework** Software system to integrate both the data and the control of new and existing tools; usual components include a user interface, object management system, and a tool set.

**FORTRAN 77** ANSI standards for FORTRAN in effect in June 1990.

**FORTRAN 8x** Future ANSI standards for FORTRAN; expected to be approved and released soon; draft standards have been circulated; unofficially called FORTRAN 90.

**forward engineering** Process of developing software from "scratch," through the phases of requirements, design, and coding.

**package** "A collection of logically related entities or computational resources" (Booch [9]).

**reengineering** "The examination and alteration of a subject system to reconstitute it in a new form and the subsequent implementation of the new form" (Chikofsky and Cross [10]); combination of reverse engineering and forward engineering.

**reverse engineering** "The process of analyzing a subject system to identify the system's components and their interrelationships and create representations of the system in another form or at a higher level of abstraction" (Chikofsky and Cross [10]); the first step of maintenance or reengineering; reverse of forward engineering; process of starting with existing code and going backward through the software development life cycle.

**software maintenance** Process of modifying existing operational software while leaving its primary functions intact (Boehm [11]).

**subject program** Program that is being maintained or reengineered.
**virtual package** Package concept as defined by Booch [9], but implemented either in Ada, which enforces the concept, or in a language in which the concept must be supported procedurally. **REFERENCES**
{"Source-Url": "https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19920013465.pdf", "len_cl100k_base": 5171, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 30017, "total-output-tokens": 5940, "length": "2e12", "weborganizer": {"__label__adult": 0.0002332925796508789, "__label__art_design": 0.0001958608627319336, "__label__crime_law": 0.00022721290588378904, "__label__education_jobs": 0.0005054473876953125, "__label__entertainment": 3.868341445922851e-05, "__label__fashion_beauty": 0.0001036524772644043, "__label__finance_business": 0.00026488304138183594, "__label__food_dining": 0.00020766258239746096, "__label__games": 0.0003616809844970703, "__label__hardware": 0.0008220672607421875, "__label__health": 0.00024008750915527344, "__label__history": 0.00016188621520996094, "__label__home_hobbies": 5.2988529205322266e-05, "__label__industrial": 0.0002815723419189453, "__label__literature": 0.00014710426330566406, "__label__politics": 0.00012159347534179688, "__label__religion": 0.0002256631851196289, "__label__science_tech": 0.01088714599609375, "__label__social_life": 4.64320182800293e-05, "__label__software": 0.00972747802734375, "__label__software_dev": 0.974609375, "__label__sports_fitness": 0.000171661376953125, "__label__transportation": 0.0002903938293457031, "__label__travel": 0.00012564659118652344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28876, 0.01672]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28876, 0.70153]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28876, 0.94049]], "google_gemma-3-12b-it_contains_pii": [[0, 3514, false], [3514, 6543, null], [6543, 8907, null], [8907, 12334, null], [12334, 14564, null], [14564, 18316, null], [18316, 20757, null], [20757, 23459, null], [23459, 26746, null], [26746, 28876, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3514, true], [3514, 6543, null], [6543, 8907, null], [8907, 12334, null], [12334, 14564, null], [14564, 18316, null], [18316, 20757, null], [20757, 23459, null], [23459, 26746, null], [26746, 28876, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 28876, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28876, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28876, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28876, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28876, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28876, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28876, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28876, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28876, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28876, null]], "pdf_page_numbers": [[0, 3514, 1], [3514, 6543, 2], [6543, 8907, 3], [8907, 12334, 4], [12334, 14564, 5], [14564, 18316, 6], [18316, 20757, 7], [20757, 23459, 8], [23459, 26746, 9], [26746, 28876, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28876, 0.15044]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
9355cabd9491b8b9c206f079a08747c680c80427
Introduction to the C-Script Block Implementation of a digital and analog PI controller Last updated in tutorials release 1.0 www.plexim.com - Request a PLECS trial license - Check the PLECS documentation 1 Introduction The C-Script block is a versatile tool in the PLECS component library that can be used for implementing custom controllers and components. The advanced capabilities of the C programming language combined with the flexible sample time settings in the C-Script block allow almost any custom component model, from a simple mathematical function to a complex state machine, to be implemented. The C-Script block can also simplify the workflow when writing C code for DSP controllers since the code can be reused for the DSP. The key skills that you will learn in this exercise are: - Understand how the C-Script block interfaces with the simulation engine through function calls. - Understand the different time settings available in the C-Script block. - Use the C-Script block for implementing a mathematical function. - Use the C-Script block for implementing a discrete and continuous PI controller. Before you begin Ensure the file buck_converter.plecs is located in your working directory. You should also have the reference files that you can compare with your own models at each stage of the exercise. 2 Function Call Interface The C-Script block interfaces with the simulation engine using a number of predefined function calls. These function calls are depicted in Fig. 1. Each function call corresponds to a code window in the C-Script editor, which is accessed from a pull-down menu. The most commonly used code windows are described below, and for a complete description, one can refer to the PLECS User Manual or the block's documentation, which is accessed by clicking the Help button. - **code declarations()** In addition to the function windows, a Code declarations window is provided for defining global variables, macros and helper functions to be used in the C-Script block functions. The code declarations code is essentially a global header file. All variables and functions defined within this window are globally visible to all functions within the C-Script block. - **start()** The Start function code window is for initializing the simulation. Internal state variables, for example, should be initialized here. - **output()** The Output function code window is designed to contain the functional code for the C-Script block. A call is made to this function at least once during each simulation time step. For this reason, any internal states or persistent variables should be updated in the update function. - **update()** If a model contains discrete internal states, a call is made to the code in the Update function code window directly after the output function has been executed. When the C-Script contains internal states, they should be updated in this function rather than in the output function to ensure they are updated only once during each time step. 3 Parameters When you open the C-Script block the Setup tab of the code editor window as shown in Fig. 2 will appear. In this window you can configure the parameters. You will actually write your C code within the function windows contained in the Code tab. 3.1 Sample time parameter The Sample time setting is a key parameter that controls when the C-Script block is called. The sample time can be inherited from the simulation engine or controlled by the C-Script block itself. 
A description of the possible sample time settings is given below:

Figure 1: Function calls made during operation of the C-Script block. The update function is called when discrete states are defined and the derivative function is called when continuous states are defined.

Figure 2: C-Script editor window for configuring parameters and writing code.

**Continuous** The continuous time setting is selected by entering 0 into the sample time dialog. With the continuous time setting, the time steps are inherited from the solver. Every time the solver takes a step, the C-Script block is executed.

**Discrete** The discrete-periodic time setting is selected by entering a positive number into the sample time dialog. The C-Script block is executed at discrete regular intervals defined by this sample time.

**Variable** The variable time setting is selected by entering -2 into the sample time dialog. With the discrete-variable time setting, the next time step is determined dynamically by the C-Script block itself by setting the NextSampleHit built-in macro. The NextSampleHit must be initialized at the beginning of the simulation to a value greater than or equal to the CurrentTime macro. See below for more information on macros in the C-Script block.

3.2 Other parameters

The other parameters are described completely in the C-Script block's documentation. However, it is worth noting the following at this stage. When creating a C-Script block that contains static variables, you can add discrete states to create global static variables. The discrete states are accessed using the macro command DiscState.

### 3.3 List of commonly-used macros

The C-Script block contains a number of built-in macro functions that can be used to interact with the model or solver. Some of the commonly used macros are:

- **InputSignal**(\(j, i\)) Reference the \(i^{th}\) signal of the \(j^{th}\) C-Script block input.
- **OutputSignal**(\(j, i\)) Reference the \(i^{th}\) signal of the \(j^{th}\) C-Script block output.
- **DiscState**(\(i\)) Reference a discrete state with index \(i\).
- **NextSampleHit** Set the next call time for the C-Script block. This variable is used when the variable sample-time setting is active.
- **CurrentTime** Retrieve the current simulation time.
- **SetErrorMessage**("msg") Abort the simulation with an error message.

### 4 Exercise: Implement a Mathematical Function

In this exercise, you will use the C-Script block to implement the sine function with an offset value.

**Your Task:**

1. Create a new simulation model, and place a C-Script component and two Constant source blocks into it. Label the first Constant block "Offset" and set its value to 0.5. Label the second Constant block "Frequency" and set its value to \(2\pi \cdot 50\). Use a Signal Multiplexer block to route the two constant values into the C-Script block. Your simulation model should look like that shown in Fig. 3.

**Figure 3:** Implementing the function \(y = 0.5 + \sin(2\pi 50 \cdot t)\) with the C-Script block.

2. You will then need to configure the C-Script block and write the code. To configure the C-Script block, open the block by double-clicking and in the **Setup** tab set the **Number of inputs** to 2 and the **Number of outputs** to 1. Set the **Sample time** setting to 0 to select a continuous, or inherited, sample time. To write the sine function you will need to use the C math library (math.h header).
In the **Code declarations** window of the **Code** tab, enter the following code:

```c
#include <math.h>
#define offset InputSignal(0,0)
#define freq   InputSignal(0,1)
```

In the **Output function code** window, enter the following code to create the sine function:

```c
OutputSignal(0,0) = sin(freq*CurrentTime) + offset;
```

3. Run the simulation: Set the simulation parameters of the PLECS solver to the following:

- Simulation stop time: \(20 \times 10^{-3}\) s
- Maximum step size: \(1 \times 10^{-4}\) s

When you run the simulation, you should see a sine wave with a period of 20 ms and a vertical offset of 0.5 V.

At this stage, your model should be the same as the reference model, sine_wave.plecs.

### 5 Exercise: Implement a Digital PI Controller

In this exercise, you will replace a continuous proportional-integral (PI) voltage controller for a buck converter with a digital PI controller. The continuous PI voltage controller for the buck converter is depicted in Fig. 4. The continuous PI control law is described by the function:

\[ y(t) = k_p e(t) + k_i \int_0^t e(\tau) \, d\tau \] (1)

In order to implement a digital PI controller using the C-Script block, you will need to use a discrete form of the PI control law. The simplest way to discretize the PI controller is to use the backwards rectangular rule to approximate the integral term with:

\[ i_k = i_{k-1} + T_s e_k \] (2)

where \(i_k\) is the value at sample number \(k\) and \(T_s\) is the sample time. Thus the digital PI control law becomes:

\[ y_k = k_p e_k + k_i i_k \] (3)

Figure 4: Continuous PI voltage controller.

#### 5.1 Configure the C-Script block

**Your Task:** Open the buck converter model buck_converter.plecs and look at the implementation of the continuous PI voltage controller. Save a copy of the buck converter model before you proceed with the following steps:

1. Look under the mask of the PI controller by right-clicking on the component and selecting **Look under mask** (or use Ctrl+U) and delete all components except for the input and output ports. Place a C-Script block directly between the input and output ports. Ensure the **Number of inputs** and **Number of outputs** in the C-Script block settings are both set to 1.

2. Add a parameter, **Sample frequency** \(f_s\), to the PI controller mask and set its value to \(25 \times 10^{3}\) Hz. To add a parameter to the PI controller mask, right-click on the mask and select **Edit Mask...** (or use Ctrl+M). In the C-Script parameters, set the **Sample time** setting to $1/f_s$. This will cause the C-Script block to execute at a discrete-periodic, or fixed, sample rate of $1/f_s$.

The C-Script code requires access to the parameters $k_p$, $k_i$ and $T_s$. To pass these directly to the C-Script block, enter them in the **Parameters** box that is displayed in the **Setup** tab. Enter the variables $k_p$, $k_i$, $1/f_s$ into the **Parameters** box.

Switch to the **Code** tab and in the **Code declarations** function define the following variables:

```c
static double kp, ki, Ts;
```

In the **Start** function assign the input parameters to the defined variables:

```c
kp = ParamRealData(0,0);
ki = ParamRealData(1,0);
Ts = ParamRealData(2,0);
```

In the **Code declarations** function, also map the error input signal to the variable $e_k$:

```c
#define ek InputSignal(0,0)
```
On the other hand, the **Output** function is typically called several times per sample period. Therefore, any discrete states, such as integrator values, that are calculated in the **Output** function will be incorrect.

1. In the C-Script settings, set the **Number of disc. states** to 1. This creates a static internal variable named `DiscState(0)` and causes the solver to invoke the **Update** function once every sample period.

2. In the **Code declarations** function, define a global variable to represent the controller output, and map the discrete state to a variable that represents the previous integrator value, $i_{k-1}$.

```c
double yk;
#define ik_1 DiscState(0)
```

Initialize $i_{k-1}$ to 0 in the **Start** function. Note that $y_k$ needs to be a global variable since it is accessed in both the **Update** and **Output** functions.

3. In the **Update** function, define the variable `double ik`, which is used to store intermediate results. Then implement the control law defined in Eqs. (2) and (3). Don't forget to add `ik_1 = ik;` after calculating `ik`.

4. In the **Output** function, assign the result of the control law calculation to the output: `OutputSignal(0,0) = yk;`. Note that the output can only be written to in the **Output** function.

When you run the simulation the output voltage should be similar to that of the model with the continuous PI controller.

**Note:** At this stage, your model should be the same as the reference model, `cscript_controller_1.plecs`.

Note: The output of the digital PI controller is delayed by one cycle because the Update function is called after the Output function, as shown in Fig. 1. The Output function therefore outputs the result calculated in the previous time step. The exact sequence of function calls for this simulation is depicted in Fig. 5. The number of calls to the Output function per time step is determined internally by the solver. For this particular model, the Output function is called twice during each major time step. However, for other models, the Output function may be called more often.

Figure 5: Timing of function calls for cscript_controller_1.plecs.

Eliminate the one cycle delay

**Your Task:** The one cycle delay can result in instability if the sample frequency is too low. To observe this effect, change the sample frequency to \(10^3\) Hz and rerun the simulation. To eliminate the delay, ensure the control result is output in the same time step it is calculated.

1. Create a global variable, `double ik`, in the **Code declarations** function.

2. Remove the control code from the **Update** function except for the line updating the discrete state, `ik_1 = ik;`

3. Shift the control code to the **Output** function:

`ik = ik_1 + Ts*ek;`
`OutputSignal(0,0) = kp*ek + ki*ik;`

In other words, the integral action is calculated in the Output function, but the running total, recorded by the discrete state, is not updated until the Update function.

At this stage, your model should be the same as the reference model, cscript_controller_2.plecs.
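For reference, one possible way the pieces of this delay-free controller fit together across the C-Script code windows is sketched below. It only uses the macros and variable names introduced above; the reference model cscript_controller_2.plecs remains the authoritative version.

```c
/* Code declarations */
static double kp, ki, Ts;    /* controller parameters */
double ik;                   /* current integrator value */
#define ek   InputSignal(0,0)
#define ik_1 DiscState(0)

/* Start function */
kp = ParamRealData(0,0);
ki = ParamRealData(1,0);
Ts = ParamRealData(2,0);
ik_1 = 0;

/* Output function: compute and output the control action in the same time step */
ik = ik_1 + Ts*ek;                    /* backward-rectangular integration, Eq. (2) */
OutputSignal(0,0) = kp*ek + ki*ik;    /* PI control law, Eq. (3) */

/* Update function: commit the running integrator total once per sample period */
ik_1 = ik;
```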
### 5.3 Implement the continuous control law

Although the primary function of the C-Script block is for implementing complex functions and discrete controllers, it allows continuous states and differential equations to be defined for solving ordinary differential equations of the form \(\dot{x} = f(x)\). The simulation engine solves the differential equation using numerical integration. Since the integrator in Eq. (1) can be described by an ordinary differential equation:

\[ \frac{di}{dt} = e(t) \] (4)

the integral action can be modeled in the C-Script block by defining a continuous state, \(i(t)\), and the differential equation Eq. (4). At each time step the solver will calculate \(i(t)\) numerically.

For each continuous state that you define, the following macros are created: ContState(i) and ContDeriv(i). These macros are the hooks that allow the simulation solver to solve the differential equation. All you need to do is describe the equation in the Derivative function.

**Your Task:**

1. Create a copy of the model cscript_controller_2.plecs and reconfigure the C-Script settings. Set the **Sample time** setting to 0 and remove the parameter 1/fs. Set the **Number of disc. states** to 0 and the **Number of cont. states** to 1. The continuous state will be used to represent the integral term in Eq. (1).

2. In the **Code declarations** function, change the InputSignal(0,0) mapping to e and map the continuous state and derivative to variable names with the following:

```c
#define e InputSignal(0,0)
#define I ContState(0)
#define I_deriv ContDeriv(0)
```

3. In the **Derivative** function, you need to enter the differential equation that describes the integrator. This is \( I = \int e(t) \, dt \), or \( dI/dt = e(t) \), therefore enter \( I\_deriv = e; \). The solver will then solve the differential equation to yield the integrator value, \( I \).

4. The appropriate initial value for the integrator, \( I = 0 \), is set in the **Start** function.

5. Remove all code from the **Update** function, and in the **Output** function remove all code except for the following:

```c
OutputSignal(0,0) = kp*e + ki*I;
```

When you run the simulation, you should see the same output voltage as with the original continuous PI controller.

At this stage, your model should be the same as the reference model, cscript_controller_4.plecs.

**When to use a continuous state**

In this example, implementing an integrator by creating a continuous state and defining a differential equation was more work than using the integrator component itself. However, working with continuous states inside the C-Script block allows you to add advanced functionality to differential equations or state-space systems. For example, you can implement an integrator that resets itself when its output reaches a certain value. This is not possible using a standard integrator component with a comparator, since feeding back the comparator output to the integrator reset port creates an algebraic loop.
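Pulling the steps of Section 5.3 together, the code windows end up containing something like the following sketch. As before, it is assumed that kp and ki are passed through the Parameters box and read with ParamRealData in the Start function; cscript_controller_4.plecs remains the authoritative reference.

```c
/* Code declarations */
static double kp, ki;
#define e       InputSignal(0,0)
#define I       ContState(0)
#define I_deriv ContDeriv(0)

/* Start function */
kp = ParamRealData(0,0);
ki = ParamRealData(1,0);
I  = 0;                       /* initial integrator value */

/* Derivative function: dI/dt = e(t), Eq. (4) */
I_deriv = e;

/* Output function: continuous PI law, Eq. (1) */
OutputSignal(0,0) = kp*e + ki*I;
```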
**6 Advanced Exercise: Implement a Digital PI Controller with Calculation Delay**

In Section 5.2 you implemented a PI controller without a calculation delay. In a practical system, a finite delay exists due to the time needed for the controller to read the input(s), perform the control calculation and write to the output(s). This delay can degrade the stability of certain systems. To simulate this calculation delay, a delay time is introduced before the control result, \( y_k \), is written to \( \text{OutputSignal}(0,0) \).

**Your Task:**

1. Save a copy of the model `cscript_controller_2.plecs` and add an additional parameter to the voltage controller mask labeled **Calculation delay**. Assign this to a variable named \( t_d \) and set its value to 0.1. This will be used to set the calculation delay time to \( 0.1T_s \).

2. In the C-Script block settings, add the argument \( t_d/f_s \) to the list of **Parameters** in the **Setup** tab and define a variable \( T_d \) in the **Code declarations** function:

```c
static double Td;
```

and assign the value \( T_d = \text{ParamRealData}(3,0) \) in the **Start** function.

3. To implement the calculation delay, you will first need to implement a hybrid discrete-variable sample time setting. The fixed-step setting will provide a sample hit at the beginning of each period and the variable time step will provide a hit after the calculation delay. Hybrid time settings must be entered in matrix format, where the first entry in a row is the sample time and the second entry is the offset time. Enter the following **Sample time** setting:

\[ \begin{bmatrix} 1/f_s, & 0; \\ -2, & 0 \end{bmatrix} \]

4. To ensure the first hit time is generated by the fixed time step setting, you should initialize the `NextSampleHit` macro, which defines the variable step hit time, to a large number in the **Start** function: \( \text{NextSampleHit} = \text{NEVER} \);

5. Note that you will need to define `NEVER` as a very large number in the **Code declarations** function. If you include the file `<float.h>` you can define `NEVER` as `DBL_MAX`, the largest machine-representable float number.

6. At the beginning of the switching cycle you will need to carry out the control calculations for \( i_k \) and \( y_k \). The calculated control action, \( y_k \), is not output until the next call to the **Output** function, which will occur at the time \( \text{CurrentTime} + T_d \). Add the following lines in the **Update** function:

```c
if (NextSampleHit == NEVER)   // beginning of switching cycle
{
    // Control calculations for ik, ik_1 and yk, as in Section 5.2 (Eqs. (2) and (3));
    // ik and yk are the globals declared in the Code declarations window.
    ik   = ik_1 + Ts*ek;
    yk   = kp*ek + ki*ik;
    ik_1 = ik;
    NextSampleHit = CurrentTime + Td;
}
else
    NextSampleHit = NEVER;
```

7. In the **Output** function, assign \( y_k \) to the output port in order to output the control action that was calculated at the beginning of the switching cycle.

At this stage, your model should be the same as the reference model, `cscript_controller_3.plecs`.

To observe the influence of the calculation delay, set \( f_s \) to \( 100\text{e}3 \) Hz and run the simulation for a calculation delay of 0.1 and 0.9. Note that this implementation only allows values \( t_d \in [0,1] \); the treatment of the special cases 0 and 1 is left for the user as an additional exercise.

**Conclusion**

In this exercise you learned how to use the PLECS C-Script block to implement a custom digital PI controller with several adaptations. This required an understanding of the function calls that are predefined in the block and used to interface with the simulation engine. Another important aspect of the C-Script block is understanding the different time settings that are available for configuration in the block, which are crucial for achieving certain desired behavior such as dynamically scheduling sample hits or mimicking a fixed-step controller. The PLECS C-Script block is highly versatile and can be used to model elaborate controllers and components.
{"Source-Url": "https://www.plexim.com/sites/default/files/tutorials/cscript_controller.pdf", "len_cl100k_base": 4856, "olmocr-version": "0.1.48", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 25453, "total-output-tokens": 5478, "length": "2e12", "weborganizer": {"__label__adult": 0.0003840923309326172, "__label__art_design": 0.0003771781921386719, "__label__crime_law": 0.00022494792938232425, "__label__education_jobs": 0.0006990432739257812, "__label__entertainment": 0.00011223554611206056, "__label__fashion_beauty": 0.00015270709991455078, "__label__finance_business": 0.00021314620971679688, "__label__food_dining": 0.000400543212890625, "__label__games": 0.0010194778442382812, "__label__hardware": 0.01611328125, "__label__health": 0.0003757476806640625, "__label__history": 0.00020503997802734375, "__label__home_hobbies": 0.0002429485321044922, "__label__industrial": 0.0012655258178710938, "__label__literature": 0.0001175999641418457, "__label__politics": 0.00019752979278564453, "__label__religion": 0.0005650520324707031, "__label__science_tech": 0.063720703125, "__label__social_life": 6.407499313354492e-05, "__label__software": 0.022247314453125, "__label__software_dev": 0.89013671875, "__label__sports_fitness": 0.00036263465881347656, "__label__transportation": 0.0006146430969238281, "__label__travel": 0.00016927719116210938}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 20764, 0.01907]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 20764, 0.85805]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 20764, 0.8351]], "google_gemma-3-12b-it_contains_pii": [[0, 209, false], [209, 3571, null], [3571, 5018, null], [5018, 7484, null], [7484, 9566, null], [9566, 12165, null], [12165, 14441, null], [14441, 17114, null], [17114, 20273, null], [20273, 20764, null], [20764, 20764, null]], "google_gemma-3-12b-it_is_public_document": [[0, 209, true], [209, 3571, null], [3571, 5018, null], [5018, 7484, null], [7484, 9566, null], [9566, 12165, null], [12165, 14441, null], [14441, 17114, null], [17114, 20273, null], [20273, 20764, null], [20764, 20764, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 20764, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 20764, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 20764, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 20764, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 20764, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 20764, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 20764, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 20764, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 20764, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 20764, null]], "pdf_page_numbers": [[0, 209, 1], [209, 3571, 2], [3571, 5018, 3], [5018, 7484, 4], [7484, 9566, 5], [9566, 12165, 6], [12165, 14441, 7], [14441, 17114, 8], [17114, 20273, 9], [20273, 20764, 10], [20764, 20764, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 20764, 0.0]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
571c04e3ea1276e0bb818684b2b4aecbc638b3d4
Applying Input-Output Tree to the Implementation of a Rapid Prototyping Tool for Java Web Applications

CHUN-FENG HSIAO1,2, CHIH-PING CHU
1Department of Computer Science and Engineering, National Cheng Kung University, Tainan, 701 Taiwan
2Department of Information Science and Technology, Chia Nan University of Pharmacy and Science, Tainan, 717 Taiwan

A prototyping tool which facilitates the design of Java applications based on existing Java classes in a repository is developed and embedded into an eLearning platform. An input-output tree is proposed to graphically show the data flow from the input to the output of a specific application. In the future, applications similar to the proposed tool may assist programmers in assembling reusable software components from the repository, with the source code generated automatically by means of automatic programming.

Keywords: automatic programming, CASE, input-output tree, rapid prototyping, software reuse

1. INTRODUCTION

Learning platforms adopted at most universities conform to SCORM (Sharable Content Object Reference Model) [1, 2, 3], which is a collection of standards and specifications for eLearning. Over the past 10 years, many eLearning courses have been developed. The assets of these online courses, including web pages, image files, sound files, video files, FLASH objects, and Java applet files, are stored on web servers. A recent study by the authors [4] found that many simulations for eLearning are designed as special web services and have been developed by expert programmers. In order to reduce the cost of developing web applications, existing assets and software components can be reused. Java and Flash are primarily used for developing web applications for eLearning. Therefore, it is desirable for an authoring tool to support the reuse of Java and Flash code. Many programmers are familiar with software revision control systems such as CVS. The process of reusing software components to accomplish a specific function is similar to the assembly of Lego™ building blocks. Knowledge management systems or configuration systems are required for managing the products of projects and facilitating software reuse. Although a learning platform can be regarded as a knowledge management system, SCORM-compliant learning platforms do not support software reuse. Many universities make learning platforms accessible 24 hours a day. This yields the following questions. Is there any possibility to merge the learning platform with the function of software reuse? Can the learning platform automatically generate the prototype of a Java applet by assembling the Java classes in the repository? Our previous study [4] showed that a learning platform can be merged with software reuse. Automatically generating prototypes of Java applets by assembling the Java classes in the repository will help form a community that shares and reuses software components via the learning platform. In addition, the computer science department does not need to purchase a software configuration system installed on a stand-alone server. Furthermore, the goal of power saving can be accomplished as well. In the present study, a prototyping tool, the Visual Online Prototyping Authoring Tool (VOPAT), which adopts an input-output tree and facilitates the development of Java applets based on the existing Java classes in the repository, is proposed.
For developers who lack coding experience, VOPAT helps them easily reuse Java applets to accelerate the development of their own Java applets. VOPAT generates a webpage based on Wikipedia, which helps rapidly produce a description of the project or application. VOPAT also offers the rapid prototyping of Java applets in a way similar to automatic programming in order to take advantage of reusable Java classes. The present study focuses on the development of VOPAT and its integration into a learning platform for automatically generating prototype Java applets; the effect on learning outcomes is beyond the scope of this topic, since users may flexibly develop many kinds of Java applet prototypes through VOPAT. To conveniently and easily demonstrate the feasibility of Java class reuse, the scope of VOPAT is confined to the domain of mathematical computation in this study. The rest of this article is divided into four sections as follows: 1) The related literature and relevant work are reviewed. 2) The methodology of VOPAT is described. 3) Implementation and test results of VOPAT are presented. 4) Some final conclusions and suggestions for future work are elaborated in the closing part.

2. RELATED WORKS

This section briefly introduces software reuse, rapid prototyping, and RPC and CORBA. Then existing automatic programming systems are reviewed and compared to VOPAT.

2.1 Software Reuse

Software reuse enables developers to leverage past accomplishments and facilitates significant improvements in software productivity and quality [5, 6]. Software developers derive private benefits from writing software and sharing their code, and collectively contribute to the development of software [7]. Such private benefits include enjoyment, fun, learning, reputation, and community membership [8, 9]. In this study, software reuse centers on VOPAT. Consequently, we will survey the users' satisfaction with the improvement in software productivity, the users' willingness to share their code, and the interaction among developers.

2.2 Rapid Prototyping

Rapid prototyping is a design methodology that has been successfully used in software engineering. Tripp and Bichelmeyer [10] indicated that rapid prototyping is a viable model for instructional systems design in a computer-based instruction context. They concluded that there are striking similarities between software engineering and instructional systems design. Two of the five potential advantages of rapid prototyping they stated are listed below: 1) Prototyping can increase creativity through quicker user feedback. 2) Prototyping accelerates the development cycle. Shih [11] stated that many teachers at the secondary high school level might lack sufficient knowledge or skills to install or maintain a programming language on their own computers. For this reason, we introduce VOPAT in this study, which not only facilitates the reuse of Java applets but also generates Java applets automatically from the repository for developers lacking coding experience.

2.3 RPC and CORBA

RPC (Remote Procedure Call) and CORBA (Common Object Request Broker Architecture) are used to develop object-oriented distributed applications. CORBA enables separate pieces of software written in different languages and running on different computers to work with each other like a single application or set of services [12]. The paper in [13] described two main disadvantages of this code development paradigm in RPC and CORBA.
First, it increases the code development time and cost. Second, it limits the development of distributed applications. Code development in RPC and CORBA is troublesome, time-consuming and error-prone.

2.4 Automatic Programming

In computer science, the term automatic programming identifies a type of computer programming in which some mechanism generates a computer program rather than having human programmers write the code [14]. Besides the generation of code from a wizard or template, IDEs (integrated development environments) such as Eclipse, Interface Builder and Microsoft Visual Studio can also generate and manipulate code to automate code refactorings that would otherwise require multiple (error-prone) manual steps, thereby improving the developer's productivity. The 15 automatic programming tools listed in [14] are either model-driven or template-based, and all of them are stand-alone software tools. Barstow [15] used the following informal definition of an automatic programming system, which implies that an automatic programming system must be domain-specific. An automatic programming system allows a computationally naive user to describe problems using the natural terms and concepts of a domain with informality, imprecision and omission of details. An automatic programming system produces programs that run on real data to effect useful computations and that are reliable and efficient enough for routine use. In [16], Budinsky et al. presented a tool for generating design pattern code automatically from a small amount of user-supplied information. In addition, they described how the tool incorporates a hypertext rendition of Design Patterns to give designers an integrated on-line reference and development tool. Bassil and Barbar [17] indicated that modern computer programming languages are governed by complex syntactic rules. Unlike natural language, they require extensive manual work and a significant amount of learning and practicing for an individual to become skilled at them and to write correct programs. Bassil and Barbar proposed a new programming language and an environment for writing computer applications based on source-code generation. It is mainly a template-driven automatic natural imperative programming language. In [18], Reformat et al. conducted an experiment on automatic programming using a GP (Genetic Programming) algorithm for software clones. The experiment demonstrated the possible usability of a GP-based approach to the automatic generation of clones. Kang et al. [19] investigated the representation of programs for program reuse. They indicated that gene expression programming (GEP) [20] may have great significance and deep influence on research into automatic programming in the future. Fertalj and Brcic [21] presented an application generator based on UML specifications and on templates written in XML/XSL. The generator preserves flexibility towards the target programming language by generating code through two transformations: first into an intermediate code and then into the code of a selected target language via a specific template. Only the templates for C# and MSSQL had been produced in the application. In the related works on automatic programming mentioned above, the user is required to download the specified software tool and install it on a PC, whereas VOPAT, which works as a web service in the manner of cloud computing, operates on the Internet.
Furthermore, VOPAT is not a source code generator but utilizes the Java classes in the repository as building blocks for assembling the Java applet prototype.

3. METHODOLOGY

This section introduces the system architecture and the scenarios of usage. The Visual Online Prototyping Authoring Tool, as illustrated in Fig. 1, consists of two function units: the reusable class assembly function unit and the webpage editor.

3.1 System Architecture

The reusable class assembly function unit assembles the reusable Java classes. In order to assemble the software building blocks through VOPAT, we need the four basic components below. 1) The repository: it is used to store the Java classes or APIs. 2) The Plug-in Repository Management System (PRMS): it handles basic operations of the repository. A more detailed description can be found in our previous study [4]. 3) The guidance document of the Java class or API: it offers a usage guide for the Java class and follows the format of Java API documentation defined in [22]. In order to facilitate the search for reusable Java classes, the source code should include comments based on the format in Table 1, a comment format defined by Sun Microsystems [22]. The PRMS will then generate the corresponding Javadoc as shown in Fig. 2. A well-formed Javadoc is the critical factor for the successful operation of the reusable class assembly function unit. 4) The transformation XML file: it describes the user requirements of the Java applet, which include the number of user interface templates (input and output), the specification of input and output, the name of the label text, and the description of the process. The XML file serves as the input of the reusable class assembly function unit and its format is defined in Table 2. Because parsing natural-language sentences into an input-output tree is difficult in general when the requirements become more complicated, the user instead needs to type keywords, separated by semicolons, that describe the process into the system user interface. In this way, it is simpler for VOPAT to construct the input-output tree and search for reusable Java classes.

![Fig. 1. The system architecture of VOPAT](image)

**Table 1. The format of comments used by VOPAT.**

    /**
     * @author Hsiao
     * @version 1.0
     */
    public class Number {
      /** Briefly describe the operation of the method below:
       * Test if the number is even or not
       *
       * @param number; an integer
       * @return boolean; is the number even?
       */
      public boolean getEven(int number){
        if (number%2 == 0) return true;
        else return false;
      }

3.2 The Input-Output tree

The reusable class assembly function unit retrieves the keywords from the transformation XML file and then tries to denote the program as an input-output tree, illustrated in Fig. 3. The left side of Fig. 3 is the standard expression tree [23, 24], which suffers from the deficiency that the data type of each node is ambiguous. Without modification, VOPAT would have trouble telling the data type of the node "number", which would cause it to fail to find the right method in the reusable class. Therefore, we propose the input-output tree shown on the right side of Fig.
3, which clearly shows the data flow from input to output of a specific application with a single GUI. In the input-output tree, the root node of the tree represents the output of the system, such as a parameter of a basic data type, an object, an array, etc. A node at level 2*n (where n>0) of the tree represents a processing method, an operator (+, -, *, /, OR, AND, NOT, ...) or a subsystem. A node at level 2*n-1 (where n>1) of the tree represents an operand and is equivalent to an input parameter of a specific method. The modified expression tree reveals that at most three methods within some reusable classes in the repository will be required to assemble the user application in Table 2. For instance, the method "is prime" needs an input parameter of type integer to generate an output parameter of type boolean. Fig. 4 shows that the system user interface is composed of one or many input-output trees. The leaf nodes of the input-output tree represent the input parameters, which are offered by the user or generated by the system (for example, constants). We can regard the input-output tree as a representation of the data flow from bottom (input) to top (output), which might be helpful to automatic programming in software reuse.

3.3 The set of domain characteristics for the VOPAT

It is sometimes difficult to judge if a potentially reusable component can be put into practice in a particular situation. To clarify this, it is necessary to define a set of domain characteristics that are shared by all software within a domain [25]. Therefore, in this study, we define the set of characteristics in Table 3, which is used to identify the reusable component. After setting an adequate weight for each characteristic, VOPAT calculates the weighted number of the reusable component. The second unit of VOPAT, the webpage editor, allows the user to edit the text of the application. In this unit, we use the open-source CKEditor [26] to achieve the goal of visual online editing; the system first accesses Wikipedia according to the keyword of the application to obtain the original text, which the user can edit and refine later.

**Table 3. The set of domain characteristics for the VOPAT.**

<table>
<thead>
<tr><th>Characteristics</th><th>Explanation</th></tr>
</thead>
<tbody>
<tr><td>Application domain</td><td>Does the reusable Java class belong to the application domain?</td></tr>
<tr><td>Method conformance</td><td>Does the method in the reusable Java class conform to the specification of input and output?</td></tr>
<tr><td>Process conformance</td><td>Does the method in the reusable Java class conform to the required process?</td></tr>
</tbody>
</table>

3.4 The scenarios of using the VOPAT

There are two ways of making a Java applet project with VOPAT, and each of them is described below.

1) Reusing an existing Java applet. The user may follow the process below to quickly reuse an existing Java applet and compose the main text of the project, which is based on Wikipedia. In phase 1, the user looks up the repository for a suitable applet according to the special requirements of the specified learning design, which is like searching for a suitable Lego block in order to accomplish a special Lego model. In phase 2, the user edits the main text of the project. In phase 3, the user completes the webpage and saves it to a local drive.
2) Generating a Java applet prototype. If the user is unsatisfied with the style of the existing Java applets, he/she may follow the steps below to generate a new Java applet prototype. In phase 1, the user chooses a suitable one of the user interface templates offered by VOPAT. This is similar to choosing the style of a blog. In phase 2, the user defines the information of the input and the output, which contains the data type and the name of the label text. In phase 3, the user inputs a simple process for the project. In phase 4, VOPAT creates the transformation XML file as the input of the reusable class assembly function unit. The assembly function unit then searches the plug-in repository and tries to assemble the reusable classes. In phase 5, VOPAT generates the specified Java applet prototype embedded in a webpage.

4. RESULTS AND CASE STUDIES

The user interface depicted in Fig. 5 is the entry point of the rapid prototyping tool, which first lists all the reusable Java applets in the repository. The developer then types a keyword to search precisely for what he/she demands, and the result is shown in Figure 6. The two buttons, "download class" (to download the class for reuse and extension) and "See API" (to view the Java API documentation for the Java applet class), allow the advanced user to add functionality to existing Java applets. More discussion about them is elaborated in our recent study [4]. The button "Class Information Template" offers the entry into the process of rapid prototyping. A new pop-up window, shown in Fig. 7, is generated when we press this button.

![Fig. 5. The entry of the VOPAT](image)

![Fig. 6. Search result](image)

In Fig. 7, image (a) is the top half of the template and image (b) is the bottom half. We can preview the existing Java applet and edit the learning content through the template.

![Fig. 7. (a)–(b) show the template for reusing the existing Java applet](image)

Figure 8 shows the scenario in which the user attempts to create a template-based application. VOPAT adopts the Model–View–Controller (MVC) architecture, and each image stands for one of the three user interface templates. In order to prove the feasibility of reusing software components like the assembly of Lego™ building blocks, four test cases were prepared to test VOPAT. These test cases offer verification and validation of VOPAT, and the results are shown as follows.

![Fig. 8. User interface templates for generating the Java applet prototype](image)

When the user selects "Template 2" in Figure 8, the user interface of the Applet Prototype Factory appears on the screen as shown in Figure 9. In test case 1, the user wants to calculate the area of a sector, where the result will be rounded to the closest integer. The user interface offers inputs for the radius and the degree of a certain sector. After typing all the information in the Applet Prototype Factory and pressing the button named "Generate the prototype", VOPAT transforms the requirement into an XML file. When the user chooses to see the detailed information of VOPAT, the system presents the information (package, method, return parameter, input parameter, the description of the method, and the weight of the method) as shown in Figures 10(a) and 10(b). Figure 11(a) shows the simple information of the reusable Java class.
When the user clicks the hyperlink "Click to see the applet in a new window", an applet prototype appears in a new pop-up window as shown in Figure 11(b). Eventually, the user can use the applet to calculate the area of a sector.

Fig. 9. The Applet Prototype Factory using template 2 (test case 1)

************ Recommended Class 1 ************
package edu.math; import java.math.BigDecimal;
Description: Calculates the area of a sector when the radius and the degree are given.
************ Recommended Class 2 ************
package edu.math; import java.math.BigDecimal;
Description: Calculates the area of a sector when the radius and the degree are given.
************ Recommended Class 3 ************
package edu.math; import java.math.BigDecimal;
Description: Calculates the length of a sector when the radius and the degree are given.
************ Recommended Class 4 ************
package edu.math; import java.math.BigDecimal;
Description: Calculates the length of a sector when the radius and the degree are given.

Fig. 10. (a)–(b) show the detailed information of the reusable Java class

From test case 1, it is shown that VOPAT successfully assembles the reusable Java classes together with the Java classes offered by the Java JDK. Figure 12 shows test case 2, in which the user wants to calculate the surface area of a cylinder, \(2\pi r^2 + 2\pi rh\). The height of the cylinder is defined as 10, and therefore only the radius is required as input in this case. Figure 13(a) shows the simple information of the reusable Java classes and Figure 13(b) shows the input-output tree of this test case, which is used internally by VOPAT.

Fig. 12. The Applet Prototype Factory using template 3 (test case 2)

Fig. 13. (a) shows the simple information of the reusable Java class; (b) shows the input-output tree of this test case

From the result of test case 2 shown in Figure 14(a)-(b), the proper working of VOPAT is again demonstrated; this test case represents the more complicated case in this study. Figure 15 shows test case 3, in which the user inputs two integers at random and the output shows the ratio of the maximum and the minimum. Figure 16(a) shows the simple information and Figure 16(b) shows the result. Figure 17 shows test case 4, in which the user wants to calculate the third power of an integer. Figure 18(a) shows the error message in this case. When the error message occurs, the user knows that a suitable Java class that can meet the requirement does not exist in the repository. The user may then read the guide shown in Figure 18(b) to modify the requirement. VOPAT is not an automatic source code generator, and if the error message emerges repeatedly, the user should write the code himself/herself or ask someone skilled at programming for help.

Fig. 16. (a) shows the simple information of the reusable Java class; (b) shows the result of the applet (test case 3).

Fig. 17. The Applet Prototype Factory using template 1 (test case 4)

A problem occurs when the assembly function unit deals with a reusable Java class that is well encapsulated. An example of Java source code in the format of standard encapsulation is given in Table 4. For instance, the method "is odd" in Fig. 3 requires an integer input, whereas the method "isOdd" in Table 4 does not take any input parameters.
According to the set of domain characteristics in Table 3, the assembly function unit will not select the class "Number" with the method "isOdd" as a candidate component at this stage. In order to solve this problem, we adopt the composition of two methods to form a subsystem. The composition of the two methods (public void setNumber(int number) and public boolean isOdd()) is therefore used instead to realize the method "is odd" in Fig. 3.

Fig. 18. (a) shows the error message of test case 4; (b) shows the guide of the Applet Factory

Table 4. The standard encapsulation of a Java class

```java
public class Number {
    private int number;   // information hiding

    public void setNumber(int number) {
        this.number = number;
    }

    public int getNumber() {
        return this.number;
    }

    public boolean isOdd() {
        if (number % 2 == 0) return false;
        else return true;
    }
}
```

5. CONCLUSION

From the four test cases and three templates, it is shown that the proposed prototyping tool VOPAT allows developers to reuse and assemble existing Java classes in the domain of mathematical computation. Although VOPAT is currently applicable only to small-scale applications and three templates, it can be used as a cloud computing service to develop small Java-applet-based projects and as a knowledge management system of reusable Java classes. VOPAT is different from approaches such as RPC and CORBA, because the operation of VOPAT is similar to automatic programming and it aims to help users in schools experience the benefit of software reuse. The input-output tree proposed in this paper plays an important role in assembling the methods of reusable Java classes. It clearly presents the data flow from input to output in a specific user interface and lets VOPAT choose suitable methods in the reusable Java class repository to fulfill the requirements of the user interface. Though the weighting of the domain characteristics is a subjective judgment and may cause errors under some conditions, it is essential for VOPAT to assemble the Java building blocks. A study of suitable weights for the domain characteristics will be conducted in the future. In order to enhance the ability to assemble complicated reusable Java classes, some approaches to program representation are required, such as graph-based individual structures [27] or grammatical evolution [28]. In the future, VOPAT may allow the drawing of UML [29] diagrams online, giving users a convenient method of describing their applications. Besides, more templates can be developed, allowing VOPAT to be applied to other domains such as physics, chemistry, and so on. On the other hand, the input-output tree may offer software developers more explicit information for the design of the user interface. Since VOPAT is independent of eLearning platforms and eLearning standards, it is easy to deploy for supporting Java class or API repositories on eLearning platforms and for providing a software configuration system infrastructure that is embedded in the eLearning platform. In conclusion, applications similar to VOPAT may assist programmers in assembling reusable software components from the repository, with the source code generated automatically by means of automatic programming.

REFERENCES

Chun-Feng Hsiao (蕭淳豐) received the B.S. degree in the Department of Computer Science and Information Engineering from National Chiao-Tung University, Taiwan, in June 1993 and the M.S.
degree in Graduate Institute of Information and Computer Education from National Kaohsiung Normal University, Taiwan, in July 2001. Since September 2005, he has been studying towards the Ph.D. degree and currently is a doctoral candidate in the Department of Computer Science and Information Engineering, National Cheng Kung University, Taiwan and a lecturer in the Department of Information Science and Technology, Chia Nan University of Pharmacy and Science, Taiwan. His research interests include e-learning and software engineering. Chih-Ping Chu (朱治平) received a B.S. degree in Agricultural Chemistry from National Chung Hsing University, Taiwan, an M.S. degree in Computer Science from the University of California, Riverside, and a Ph.D. degree in Computer Science from Louisiana State University. He is currently a professor in the Department of Computer Science and Information Engineering of National Cheng Kung University, Taiwan, R.O.C. His research interests include parallelizing compilers, parallel computing, parallel processing, internet computing, DNA computing, and software engineering.
{"Source-Url": "https://www.iis.sinica.edu.tw/page/jise/FILE/AcceptedList/110/110455-AIT.pdf", "len_cl100k_base": 6039, "olmocr-version": "0.1.48", "pdf-total-pages": 19, "total-fallback-pages": 0, "total-input-tokens": 34915, "total-output-tokens": 8321, "length": "2e12", "weborganizer": {"__label__adult": 0.0003535747528076172, "__label__art_design": 0.0003981590270996094, "__label__crime_law": 0.00030303001403808594, "__label__education_jobs": 0.005859375, "__label__entertainment": 6.783008575439453e-05, "__label__fashion_beauty": 0.0001735687255859375, "__label__finance_business": 0.0002040863037109375, "__label__food_dining": 0.0003273487091064453, "__label__games": 0.0005030632019042969, "__label__hardware": 0.0008029937744140625, "__label__health": 0.0004067420959472656, "__label__history": 0.00024580955505371094, "__label__home_hobbies": 0.00012010335922241212, "__label__industrial": 0.00035762786865234375, "__label__literature": 0.00029087066650390625, "__label__politics": 0.0002300739288330078, "__label__religion": 0.0005178451538085938, "__label__science_tech": 0.00952911376953125, "__label__social_life": 0.00014460086822509766, "__label__software": 0.00597381591796875, "__label__software_dev": 0.97216796875, "__label__sports_fitness": 0.00026679039001464844, "__label__transportation": 0.0005736351013183594, "__label__travel": 0.00020503997802734375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 33415, 0.03395]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 33415, 0.7687]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 33415, 0.85166]], "google_gemma-3-12b-it_contains_pii": [[0, 2481, false], [2481, 5515, null], [5515, 8456, null], [8456, 11405, null], [11405, 12929, null], [12929, 13251, null], [13251, 15125, null], [15125, 15785, null], [15785, 18474, null], [18474, 19645, null], [19645, 20963, null], [20963, 22377, null], [22377, 22567, null], [22567, 23534, null], [23534, 24556, null], [24556, 25898, null], [25898, 28988, null], [28988, 32039, null], [32039, 33415, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2481, true], [2481, 5515, null], [5515, 8456, null], [8456, 11405, null], [11405, 12929, null], [12929, 13251, null], [13251, 15125, null], [15125, 15785, null], [15785, 18474, null], [18474, 19645, null], [19645, 20963, null], [20963, 22377, null], [22377, 22567, null], [22567, 23534, null], [23534, 24556, null], [24556, 25898, null], [25898, 28988, null], [28988, 32039, null], [32039, 33415, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 33415, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 33415, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 33415, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 33415, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 33415, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 33415, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 33415, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 33415, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 33415, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 33415, null]], "pdf_page_numbers": [[0, 2481, 1], [2481, 
5515, 2], [5515, 8456, 3], [8456, 11405, 4], [11405, 12929, 5], [12929, 13251, 6], [13251, 15125, 7], [15125, 15785, 8], [15785, 18474, 9], [18474, 19645, 10], [19645, 20963, 11], [20963, 22377, 12], [22377, 22567, 13], [22567, 23534, 14], [23534, 24556, 15], [24556, 25898, 16], [25898, 28988, 17], [28988, 32039, 18], [32039, 33415, 19]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 33415, 0.06316]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
036cec062d1f4a0f0ba92897ac746c58a6b1ff3c
Web 2.0 artifacts in SME-networks – A qualitative approach towards an integrative conceptualization considering organizational and technical perspectives

Nadine Blinn¹, Nadine Lindermann², Katrin Fäcks¹, Markus Nüttgens¹

¹University of Hamburg, School of Business, Economics and Social Sciences, Institute for Information Systems, Von-Melle-Park 9, D-20146 Hamburg
nadine.blinn@wiso.uni-hamburg.de, katrin.faecks@wiso.uni-hamburg.de, markus.nuettgens@wiso.uni-hamburg.de

²University of Koblenz-Landau, Computer Science Faculty, Institute for Management, Universitaetsstrasse 1, D-56070 Koblenz
nadine.lindermann@uni-koblenz.de

Abstract: Small and medium sized enterprises (SMEs) face new challenges in a complex and dynamic competitive environment. Accordingly, SMEs need to cooperate due to their restricted resources and limited capacities. Here, Enterprise 2.0 is seen as a supporting approach. As there is a lack of academic publications concerning recommendations for the application of Web 2.0 artifacts in SME-networks, we aim at bridging this gap with the following paper by suggesting a conceptual base following the design science approach. Based on technical and organizational requirements resulting from interviews with representatives of SMEs participating in a regional SME-network, we transfer these requirements into a prototypic concept. The developed concept provides a basis for a field test to evaluate the concept and for further research.

1 Introduction

Since small and medium sized enterprises (SMEs) represent 99 % of all European enterprises, they are of high social and economic importance within Europe [EC03]. With respect to their restricted resources and limited capacity for innovation, SMEs need to cooperate to compete with new challenges in a complex and dynamic competitive environment. Cooperation thereby enables SMEs to access and operate on an extended resource base [HP96; SC07]. From the technical point of view, Web 2.0 tools as software-oriented Web 2.0 artifacts\(^1\) are seen as adequate tools for SMEs to increase productivity as well as proximity to the market [Wy08]. The advantages of Web 2.0 tools for internal use in enterprises are beyond dispute. However, while the implementation of Web 2.0 artifacts in SMEs is considered useful and necessary [Al09; Fa06], there is room for expansion, as the implementation of Web 2.0 artifacts remains exceptional [CB07]. From an organizational point of view, SMEs are often organized in a patriarchal way. Thus, entrepreneurial initiatives are often driven by one or two individuals [Sc97]. Consequently, generally not all employees can participate in the development of new ideas [TH96]. At present, no results concerning a concept that supports the challenges of SME-networks by using the Web 2.0 approach from both organizational and technical perspectives have been published in the information systems research landscape. This paper aims at bridging this gap by presenting current results of a qualitative research approach. Following the design science approach [He04], our work aims at identifying Web 2.0 tools that should be implemented within a network of SMEs by means of an incremental qualitative approach considering organizational and technical perspectives. The paper is structured as follows: Chapter 2 outlines general characteristics of SMEs and Web 2.0 as well as how SMEs are using Web 2.0 in practice.
Thereby we emphasize the need for an integrative consideration of organizational and technical aspects in the software development process for Web 2.0. Chapter 3 presents results of expert interviews conducted with SME managers of the "WirtschaftsForum Neuwied e.V.", the SME-network under consideration. Thereby we gathered organizational and technical requirements for the development of the Web 2.0 platform. Based on these results, recommendations for an incremental software development process considering organizational and technical requirements towards an integrative Web 2.0 conceptualization are given. Chapter 4 concludes with further research questions and next steps within the project.

2 Background: Web 2.0 in the Context of SMEs

Web 2.0 is a highly popular term that is widely disputed within the literature. It provides new possibilities for companies to organize their business in an innovative way. This chapter aims to introduce Web 2.0 in an organizational context. Therefore it outlines results of recent studies that analyze how enterprises, especially SMEs, are using Web 2.0 in practice and which challenges they face by using it. On the basis of these findings we derive an incremental research approach to develop a Web 2.0 platform within a specific SME-network.

\(^1\) In the following, the term "Web 2.0 artifact" comprises Web 2.0 applications (e.g. blogs), also named Web 2.0 tools, as well as Web 2.0 concepts (e.g. tagging).

2.1 Web 2.0 and Enterprise 2.0

Web 2.0 is a phenomenon representing a second-generation approach to the World Wide Web (WWW) which differs from the previous pattern of passive content consumption by users. The term was first introduced by O'REILLY and comprises a "business revolution in the computer industry caused by the move to the internet as platform" [OR06] which allows users to participate in the process of creating and sharing content. Thus internet content in Web 2.0 is not just to be read, listened to or observed; Web 2.0 is created to actively communicate and participate on the Internet [McA06; OR05]. These concepts are supported by different Web 2.0 tools, often also referred to as social software [SGL06]. Web 2.0 tools as software-oriented Web 2.0 artifacts are web-based applications afforded by emerging so-called Web 2.0 technologies\(^2\) [Al07]. Moreover, widely and 24/7 available broadband internet access and decreasing internet costs have supported the development of Web 2.0 artifacts. To categorize Web 2.0 artifacts, a framework considering the different functions of the tools is reasonable. According to PLEIL, the Web 2.0 functions are [Pl06]: Authoring, Sharing, Collaboration, Networking and Scoring. Figure 1 gives a brief overview of current Web 2.0 tools and principles, a brief description of each artifact and the corresponding functionalities.
<table>
<thead>
<tr><th>Artifact</th><th>Description</th><th>Function(s)</th></tr>
</thead>
<tbody>
<tr><td>Weblog</td><td>Web-based communication medium, determined by the following characteristics: chronology (time stamp for entries); actuality (reference to current events and subjects); interaction (comment function for readers); internet-relation (links to continuative information, links to other blogs, "trackbacks")</td><td></td></tr>
<tr><td>Wiki</td><td>Collection of websites that can be edited by every user</td><td>Authoring, Sharing, Collaboration</td></tr>
<tr><td>Social Tagging</td><td>Collective indexing or tagging of existing content to ease the indexing of content</td><td>Sharing, Scoring</td></tr>
<tr><td>Social Networking</td><td>Maintenance and building of contacts</td><td>Networking</td></tr>
<tr><td>Podcast</td><td>Broadcast or broadcast series of audio or video content</td><td>Sharing</td></tr>
</tbody>
</table>

Figure 1: Web 2.0 artifacts (own creation referring to [KE06] and [Du07])

\(^2\) Examples of Web 2.0 technologies are: Asynchronous JavaScript and XML (AJAX), Really Simple Syndication (RSS) or the ATOM Syndication Format (ASF) [AL07].

Applying Web 2.0 technology in an organizational context is referred to as Enterprise 2.0. The term was coined by McAfee to focus on those Web 2.0 platforms that are used "within companies, or between companies and their partners and customers" [McA06]. While Web 2.0 is stated to be a business revolution and a milestone in the WWW, it is also criticized as being just hype or a "dotcom bubble". Thus, the section below sketches major results of recent studies analyzing the question of how companies are using Web 2.0 technologies in practice.

2.2 Enterprise 2.0 in Practice: State of the Art

This section summarizes the main results of recent studies considering the state of the art of Enterprise 2.0 in practice [CB07; McK08; TEI07]: In a nutshell, there is a trend that Web 2.0 is becoming familiar within companies and that all companies plan to spend more on it. Primarily large companies and enterprises that are deriving business value from Web 2.0 are using it extensively. Thereby Web 2.0 tools are integrated into business activities both outside the company, to improve customer services and relations, and inside the company, to optimize internal information and knowledge management. However, not all companies are using Web 2.0. While some companies are dissatisfied with existing Web 2.0 tools and abandon their use, for some companies the term Web 2.0 is not known and its benefits are not clear: Web 2.0 comprises a multitude of technologies, applications and services that provide different functionalities and services that are hard to differentiate. As no common definition of Web 2.0 exists, just a few people really know what it means. Managers do not understand the economic benefit that Web 2.0 can bring to their company and do not encourage its use within the enterprise. Besides, some companies suspect a lack of security in using Web 2.0. While SMEs are companies with fewer than 250 employees [EC03], the section below states further challenges of applying Web 2.0 in SMEs' practice.
2.3 The Challenge of Applying Web 2.0 in SMEs

Even though companies perceive an increasing benefit from using Web 2.0, its adoption is associated with primarily non-technical barriers and challenges. Applying Web 2.0 in SMEs thus requires considering the specific characteristics of SMEs to gain an understanding of how Web 2.0 is actually used in SME practice. In general, the SME sector is very dynamic. While many new enterprises start up every year, only forty percent of them survive for ten years [LP05]. This is caused by the specific management structure of these companies: SMEs are considerably influenced by the personality of the company's owners and their attitude towards doing business [BG06; LP05]: "A real small firm has two arms, two legs and a giant ego" [Bu01]. The strategic horizon tends to be short, with a focus on a survival strategy and a reactive decision style due to limited resources [LP05]. Thus, planning and implementing Information Technology (IT) tends to take a short-term perspective. IT is used to manage day-to-day operations rather than to support management activities. As SMEs mostly have no IT department or expertise, the SME's owner is the only person with the authority and (limited) knowledge to identify IT opportunities and to adopt them. Implementing IT often occurs in an ad hoc fashion and highly depends on the owner's personality, experience and skills [LP05; SC07]. Given this context, the adoption of Web 2.0 in SME practice differs in some points from the study results outlined previously. While intensive usage of the Internet by SMEs can be observed, the utilization of Web 2.0 remains an exception. The Internet is mainly used for e-mail communication with customers and suppliers as well as for collecting information. However, there is an increasing use of complex online applications for customer service and purchasing. In the next two years, rising internet activities for customer communication are expected. Contrary to this, Web 2.0 has no business relevance for some SMEs. Although they perceive improvements in customer relations or an optimization of information gathering, SMEs regard the potential success of Web 2.0 with skepticism. A minority of SMEs believe that Web 2.0 will impact their business, since they are not able to evaluate its potential. Additionally, a majority of SMEs perceive risks in using Web 2.0 within their company (e.g. legal risks, risks of abuse) [DS08a; DS08b; ECH08; SCN08].

3 A Concept for Cooperation Support of SMEs by Web 2.0 artifacts

Within the project KMU 2.0\(^3\) (SME 2.0) we could observe that SMEs perceive potentials in using Web 2.0 technology in a cross-organizational context. Thereby Web 2.0 provides possibilities to meet the need for efficient collaboration as well as to gain economic benefit from it. In this regard, we introduce the term SME 2.0 to focus on Web 2.0 applications that are targeted at the necessities of SME-networks [vK08]. Thereby SME 2.0 "can't just be about a wiki here, a blog there forever" [Ho07]; rather, it has to be embedded in the specific context of the particular SME-network [KR07].

3.1 Use Case and Research Design

The research project KMU 2.0 explores new management strategies for collaboration in SME-networks enabled by Web 2.0 applications and referring to innovative and cooperative solutions for daily work life problems (e.g. workers' health protection or work-life balance issues).
This comprises an analysis of concepts and models of self-organization and information technology (IT) in the context of Web 2.0, assuming

- an employee's confidence in using Web 2.0 applications in private life and thus a motivation to participate on a Web 2.0 platform in work life;
- a high potential for creativity and innovation offered by heterogeneous groups.

\(^3\) KMU 2.0 is funded by the German Federal Ministry of Education and Research (BMBF). For further information see www.kmu20.net.

Given this context, we examine the capability of Web 2.0 applications to integrate employees from different SMEs participating in a cross-organizational network in order to profit from their collaborative creativity. The project raises the question of whether the use of specific Web 2.0 applications fosters the exchange of creativity and innovative ideas within a network of SMEs. Thereby we focus on the generation of new forms of innovation processes among the cooperating participants enabled by Web 2.0. This requires an incremental research approach gathering organizational and technical requirements for Web 2.0-based cooperation in order to develop and implement a Web 2.0 platform within a network of SMEs. Figure 2 shows the dependencies of the different research aspects and perspectives.

![SME 2.0: Organizational/Technical Requirements](image)

The project is based on field research within a specific network, the "WirtschaftsForum Neuwied e.V.". The "WirtschaftsForum Neuwied e.V." was founded in 2002 and is a regional network of SMEs in the north of Rhineland-Palatinate, Germany. It consists of roughly 100 SMEs, primarily from the industry and business sector, employing about 8,000 individuals. With regard to its members, who vary in enterprise size, represent different branches and offer diverse products and services, the "WirtschaftsForum Neuwied e.V." is heterogeneous in structure. It thus focuses on non-competitive activities (e.g. daily work life problems) and aims at fostering knowledge transfer between its members and enhancing collaboration and business relations. To gather first requirements for the development of a Web 2.0 platform, which will be implemented in the "WirtschaftsForum Neuwied e.V.", explorative interviews were conducted with six executives of the cooperating SMEs. These companies represent the project's six value partners, who act as lead users, test the Web 2.0 platform and distribute it among the members. With regard to the results of Chapter 2, the interviews were directed at collecting organizational and technical requirements for the development of a Web 2.0 platform that meets the specific needs of cooperating SMEs. Therefore the interviews provide general information about the SMEs and their collaboration with the "WirtschaftsForum Neuwied e.V." as well as requirements, benefits and objectives of using Web 2.0 in this context.

3.2 Results

In total, 83 requirements (partly with sub-requirements) were extracted from the interviews. Figure 3 presents selected requirements. The denotation of the columns is described in Chapter 3.3.
<table>
<thead>
<tr><th>Dimension</th><th>Requirement</th><th>Category</th><th>Focus</th><th>Aim (of the requirements)</th><th>Web 2.0 functionalities</th></tr>
</thead>
<tbody>
<tr><td>"Who-is-doing-what"</td><td>Overview of the structure of WirtschaftsForum members: Who is part of the WirtschaftsForum? Who are the particular contacts?</td><td>A</td><td>hybrid</td><td>SME; individual</td><td>X</td></tr>
<tr><td></td><td>Establishing contacts</td><td>A</td><td>hybrid</td><td>individual</td><td>X</td></tr>
<tr><td></td><td>Increasing involvement of staff in WirtschaftsForum activities: WirtschaftsForum platform for chief executive managers and staff</td><td>0</td><td>organizational</td><td>SME</td><td>X X X</td></tr>
<tr><td></td><td>Illustration of goals and utility of the platform</td><td>0</td><td>organizational</td><td>individual</td><td>X X X X X</td></tr>
<tr><td></td><td>Web 2.0 platform for a closed area</td><td>A</td><td>hybrid</td><td>SME; individual</td><td>X</td></tr>
<tr><td>"Design"</td><td>Easy handling</td><td>A</td><td>hybrid</td><td>individual</td><td>X X X X X</td></tr>
<tr><td></td><td>Registration with few personal data (data efficiency)</td><td>A</td><td>technical</td><td>individual</td><td></td></tr>
<tr><td>"Data security"</td><td>Avoid circulation of untrue and false information (resilience of information)</td><td>A</td><td>hybrid</td><td>SME; individual</td><td>X X X X X</td></tr>
<tr><td></td><td>Differentiation of sensitive/non-sensitive information</td><td>B</td><td>hybrid</td><td>SME; individual</td><td>X X</td></tr>
<tr><td>"Enhancements"</td><td>Information on topics concerning current problems, e.g. energy consulting</td><td>B</td><td>hybrid</td><td>SME; individual</td><td>X</td></tr>
<tr><td></td><td>Platform for generating, accumulating and communicating ideas</td><td>B</td><td>hybrid</td><td>SME; individual</td><td>X X</td></tr>
<tr><td>"Behavioural"</td><td>No suppression of negative comments</td><td>0</td><td>hybrid</td><td>SME; individual</td><td>X</td></tr>
<tr><td></td><td>Knowledge transfer</td><td>0</td><td>hybrid</td><td>individual</td><td>X X</td></tr>
</tbody>
</table>

Figure 3: Selected Requirements

3.3 Discussion

In general, the interview results confirm the actual use of Web 2.0 in SMEs' practice as outlined in Chapter 2.3. However, we could observe that Web 2.0 is perceived as an instrument to optimize cooperation within the "WirtschaftsForum Neuwied e.V.". The companies' expectations of joining the network are not entirely met at present. All interviewees express a high need to obtain general information on the WirtschaftsForum members. As a general survey of the member structure, which comprises information about branches, business areas and services provided, is not available yet, the enterprises perceive a lack of possibilities to represent their company and to exchange services within the network. In this regard, we decided to focus first on the development of a closed Web 2.0 platform that fulfills these needs and will be refined during our project. Further requirements highly depend on the company's own strategy and thus have to be analyzed within the course of our project. While analyzing the interview content, we could identify five dimensions of requirements, which allowed us to structure the requirements according to:

- **Who-is-doing-what**: The platform gives a general overview of the member structure of the SME-network.
- **Design**: Configuration, design and usability aspects.
- **Data security**: Meeting the high security needs of SMEs.
- **Extensions**: Options to extend the platform.
- **Behavioral**: Aspects comprising rules and an ethical code that constitute the overall behaviour of the platform participants.
These requirements partly address technical aspects of the prototype (e.g. ease of use), partly organizational aspects (e.g. gaining economic benefit) and partly hybrid aspects. We identified hybrid aspects as organizational requirements which can be supported by technology (e.g. initiation of contacts). By assigning the Web 2.0 functionalities (Authoring, Sharing, Collaboration, Networking, Scoring) to the particular requirements and dimensions, we could schematically identify the Web 2.0 tool that fulfils these requirements. The requirements can be categorized according to the aim of their use: either they aim at supporting individual use, the SMEs' use, or both. Furthermore, we categorized the requirements according to importance: A (must have within the first prototype) and B (further implementation). As a result, we could identify the relevant requirements for each iterative phase of our incremental development process. Requirements that cannot be realized by technical means are categorized as "0". These requirements are thus important for the organizational management of the SME-network. Analyzing the dimensions, we concluded that a social network tool is the Web 2.0 tool fulfilling most of the A-requirements and providing the most technical possibilities to expand the platform (along the lines of www.wer-kennt-wen.de). Thereby we follow the principle of sparing use of applications, implying that the use of different applications with the same or similar functions is avoided.

3.4 Recommendations

As most of the members of the "WirtschaftsForum Neuwied e.V." are not familiar with Web 2.0 concepts or Web 2.0 tools, the academic project partners decided to conceptualize a prototypic Web 2.0 platform at an early stage of the project. This decision was made so that the Forum members have a "playground" to try out and learn the Web 2.0 concepts by using them. The information and requirements we obtained from the interviews showed that a prototype fulfilling all requirements at once is neither realizable nor reasonable. As most of the interviewed persons are not familiar with Web 2.0 concepts or Web 2.0 tools, they will probably change their requirements while testing the prototype and identify more requirements during the testing phase. By analyzing the interview recordings, we could identify three groups of requirements: technical requirements, organizational requirements and hybrid requirements concerning inseparable technical and organizational perspectives. Hence, to transfer these requirements into an integrated conceptualization considering all groups of requirements, an iterative procedure is necessary.
In this manner, Web 2.0 artifacts can be implemented into the SME-network in a sustainable way. Towards an integrated conceptualization, we recommend the following steps:

1. Requirements survey: gather information through structured interviews to obtain initial user requirements, and extract the requirements from the interviews by analyzing their essence.
2. Classification of the requirements: to structure the requirements, we recommend several dimensions (cp. Figure 3). The recommended dimensions are: A) Main content requirements (in the given case “Who-is-doing-what”), B) Design, C) Data security, D) Extensions, E) Behavioral. After having allocated a requirement, we recommend identifying the associated Web 2.0 functionalities as well as the requirement group (technical, organizational, hybrid). The Web 2.0 functionalities provide a basis for prioritizing the technical requirements.
3. Prioritization of the (technical) requirements to obtain a first set of requirements for the first prototype (A: first prototype, B: further implementation).
4. Implementation of the A-requirements in a first prototype according to the identified Web 2.0 tool.
5. Train the users in the basic functionalities, and present the economic benefits the companies gain by participating in the platform.
6. Test the prototype in a two-tier procedure: first, the lead users (in our case a heterogeneous group of six so-called value partners) test the prototype; then the entire Forum tests it. Throughout, the lead users act as opinion formers.
7. Requirements survey: in a second round, further requirements that result from the testing stage are surveyed.
8. Implementation of the B-requirements and the requirements identified after testing.
9. Test the extended prototype and monitor user behaviour (e.g. click paths).

With this set of recommendations we aim at suggesting a sustainable concept to implement Web 2.0 in an SME-network. The concept is going to be evaluated in cooperation with the “WirtschaftsForum Neuwied e.V.”.

4 Summary and Outlook

Small and medium sized enterprises (SMEs) face new challenges in a complex and dynamic competitive environment. To meet these challenges, SMEs need to cooperate because of their restricted resources and limited capacities. Enterprise 2.0 is seen as an approach to address the current problems SMEs face. As there is a lack of academic publications concerning recommendations for the application of Web 2.0 artefacts in SME-networks, we presented a conceptual base following the design science approach. The approach is based on technical and organizational requirements resulting from interviews with representatives of SMEs participating in a regional SME-network. With the aid of several analytical dimensions, we identified technical, organizational as well as hybrid requirements and transferred them into a prototypical iterative concept. We will apply this concept and evaluate it in cooperation with the “WirtschaftsForum Neuwied e.V.”. This leads us to further research questions: Can Web 2.0 novices in the SMEs handle the prototype? Is sustainable learning of Web 2.0 artifacts possible? How do individuals accept or reject the Web 2.0 artifacts? Do the users apply the prototypical Web 2.0 platform to solve their work-life problems? Is this concept portable unchanged to other SME-networks? After having implemented the first prototype, the next step is to train the users and evaluate the acceptance of the prototype.
Furthermore, according to the recommended concept, further requirements are going to be surveyed. **Literature** [CB07] CoreMedia; Berlecon Research: Enterprise 2.0 in Deutschland – Verbreitung, Chancen und Herausforderungen. A Study on behalf of CoreMedia conducted by Berlecon Research, November 2007.
{"Source-Url": "http://subs.emis.de/LNI/Proceedings/Proceedings150/262.pdf", "len_cl100k_base": 6102, "olmocr-version": "0.1.49", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 25916, "total-output-tokens": 7993, "length": "2e12", "weborganizer": {"__label__adult": 0.0005154609680175781, "__label__art_design": 0.002178192138671875, "__label__crime_law": 0.0007534027099609375, "__label__education_jobs": 0.01416015625, "__label__entertainment": 0.00042319297790527344, "__label__fashion_beauty": 0.0004012584686279297, "__label__finance_business": 0.12103271484375, "__label__food_dining": 0.0006814002990722656, "__label__games": 0.0012617111206054688, "__label__hardware": 0.002216339111328125, "__label__health": 0.0008358955383300781, "__label__history": 0.00104522705078125, "__label__home_hobbies": 0.0003905296325683594, "__label__industrial": 0.001255035400390625, "__label__literature": 0.001071929931640625, "__label__politics": 0.001094818115234375, "__label__religion": 0.0006394386291503906, "__label__science_tech": 0.1412353515625, "__label__social_life": 0.0005049705505371094, "__label__software": 0.1878662109375, "__label__software_dev": 0.5185546875, "__label__sports_fitness": 0.0003199577331542969, "__label__transportation": 0.0009398460388183594, "__label__travel": 0.0005521774291992188}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 32631, 0.0409]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 32631, 0.18156]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 32631, 0.85208]], "google_gemma-3-12b-it_contains_pii": [[0, 1951, false], [1951, 4889, null], [4889, 7735, null], [7735, 10688, null], [10688, 13691, null], [13691, 16095, null], [16095, 18373, null], [18373, 20540, null], [20540, 23671, null], [23671, 25925, null], [25925, 29201, null], [29201, 32631, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1951, true], [1951, 4889, null], [4889, 7735, null], [7735, 10688, null], [10688, 13691, null], [13691, 16095, null], [16095, 18373, null], [18373, 20540, null], [20540, 23671, null], [23671, 25925, null], [25925, 29201, null], [29201, 32631, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 32631, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 32631, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 32631, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 32631, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 32631, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 32631, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 32631, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 32631, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 32631, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 32631, null]], "pdf_page_numbers": [[0, 1951, 1], [1951, 4889, 2], [4889, 7735, 3], [7735, 10688, 4], [10688, 13691, 5], [13691, 16095, 6], [16095, 18373, 7], [18373, 20540, 8], [20540, 23671, 9], [23671, 25925, 10], [25925, 29201, 11], [29201, 32631, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 32631, 0.20896]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
6853d93d08370745518eb8127112535aff548807
# Contents

1 Installation and Versioning
2 Vectors
  2.1 Thrust Namespace
  2.2 Iterators and Static Dispatching
3 Algorithms
  3.1 Transformations
  3.2 Reductions
  3.3 Prefix-Sums
  3.4 Reordering
  3.5 Sorting
4 Fancy Iterators
  4.1 constant_iterator
  4.2 counting_iterator
  4.3 transform_iterator
  4.4 permutation_iterator
  4.5 zip_iterator
5 Additional Resources
6 Notices
  6.1 Notice
  6.2 OpenCL
  6.3 Trademarks

Thrust

The API reference guide for Thrust, the CUDA C++ template library.

Thrust is a C++ template library for CUDA based on the Standard Template Library (STL). Thrust allows you to implement high performance parallel applications with minimal programming effort through a high-level interface that is fully interoperable with CUDA C. Thrust provides a rich collection of data parallel primitives such as scan, sort, and reduce, which can be composed together to implement complex algorithms with concise, readable source code. By describing your computation in terms of these high-level abstractions you provide Thrust with the freedom to select the most efficient implementation automatically. As a result, Thrust can be utilized in rapid prototyping of CUDA applications, where programmer productivity matters most, as well as in production, where robustness and absolute performance are crucial.

This document describes how to develop CUDA applications with Thrust. The tutorial is intended to be accessible, even if you have limited C++ or CUDA experience.

Chapter 1. Installation and Versioning

Installing the CUDA Toolkit will copy Thrust header files to the standard CUDA include directory for your system. Since Thrust is a template library of header files, no further installation is necessary to start using Thrust. In addition, new versions of Thrust continue to be available online through the GitHub Thrust project page.

Chapter 2. Vectors

Thrust provides two vector containers, host_vector and device_vector. As the names suggest, host_vector is stored in host memory while device_vector lives in GPU device memory. Thrust's vector containers are just like std::vector in the C++ STL.
Like std::vector, host_vector and device_vector are generic containers (able to store any data type) that can be resized dynamically. The following source code illustrates the use of Thrust's vector containers.

```cpp
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <iostream>

int main(void)
{
    // H has storage for 4 integers
    thrust::host_vector<int> H(4);

    // initialize individual elements
    H[0] = 14;
    H[1] = 20;
    H[2] = 38;
    H[3] = 46;

    // H.size() returns the size of vector H
    std::cout << "H has size " << H.size() << std::endl;

    // print contents of H
    for(int i = 0; i < H.size(); i++)
        std::cout << "H[" << i << "] = " << H[i] << std::endl;

    // resize H
    H.resize(2);
    std::cout << "H now has size " << H.size() << std::endl;

    // Copy host_vector H to device_vector D
    thrust::device_vector<int> D = H;

    // elements of D can be modified
    D[0] = 99;
    D[1] = 88;

    // print contents of D
    for(int i = 0; i < D.size(); i++)
        std::cout << "D[" << i << "] = " << D[i] << std::endl;
}
```

As this example shows, the = operator can be used to copy a host_vector to a device_vector (or vice-versa). The = operator can also be used to copy host_vector to host_vector or device_vector to device_vector. Also note that individual elements of a device_vector can be accessed using the standard bracket notation. However, because each of these accesses requires a call to cudaMemcpy, they should be used sparingly. We'll look at some more efficient techniques later.

It's often useful to initialize all the elements of a vector to a specific value, or to copy only a certain set of values from one vector to another. Thrust provides a few ways to do these kinds of operations.

```cpp
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/copy.h>
#include <thrust/fill.h>
#include <thrust/sequence.h>
#include <iostream>

int main(void)
{
    // initialize all ten integers of a device_vector to 1
    thrust::device_vector<int> D(10, 1);

    // set the first seven elements of a vector to 9
    thrust::fill(D.begin(), D.begin() + 7, 9);

    // initialize a host_vector with the first five elements of D
    thrust::host_vector<int> H(D.begin(), D.begin() + 5);

    // set the elements of H to 0, 1, 2, 3, ...
    thrust::sequence(H.begin(), H.end());

    // copy all of H back to the beginning of D
    thrust::copy(H.begin(), H.end(), D.begin());

    // print D
    for(int i = 0; i < D.size(); i++)
    {
        std::cout << "D[" << i << "] = " << D[i] << std::endl;
    }

    return 0;
}
```

Here we've illustrated use of the fill, copy, and sequence functions. The copy function can be used to copy a range of host or device elements to another host or device vector. Like the corresponding STL function, thrust::fill simply sets a range of elements to a specific value. Thrust's sequence function can be used to create a sequence of equally spaced values.

2.1. Thrust Namespace

You'll notice that we use things like `thrust::host_vector` or `thrust::copy` in our examples. The `thrust::` part tells the C++ compiler that we want to look inside the thrust namespace for a specific function or class. Namespaces are a nice way to avoid name collisions. For instance, `thrust::copy` is different from `std::copy` provided in the STL. C++ namespaces allow us to distinguish between these two copy functions.

2.2. Iterators and Static Dispatching

In this section we used expressions like `H.begin()` and `H.end()` or offsets like `D.begin() + 7`. The result of `begin()` and `end()` is called an iterator in C++.
In the case of vector containers, which are really just arrays, iterators can be thought of as pointers to array elements. Therefore, `H.begin()` is an iterator that points to the first element of the array stored inside the H vector. Similarly, `H.end()` points to the element one past the last element of the H vector.

Although vector iterators are similar to pointers they carry more information with them. Notice that we did not have to tell `thrust::fill` that it was operating on a `device_vector` iterator. This information is captured in the type of the iterator returned by `D.begin()`, which is different than the type returned by `H.begin()`. When a Thrust function is called, it inspects the type of the iterator to determine whether to use a host or a device implementation. This process is known as static dispatching since the host/device dispatch is resolved at compile time. Note that this implies that there is no runtime overhead to the dispatch process.

You may wonder what happens when a "raw" pointer is used as an argument to a Thrust function. Like the STL, Thrust permits this usage and it will dispatch the host path of the algorithm. If the pointer in question is in fact a pointer to device memory then you'll need to wrap it with `thrust::device_ptr` before calling the function. For example:

```cpp
size_t N = 10;

// raw pointer to device memory
int * raw_ptr;
cudaMalloc((void **) &raw_ptr, N * sizeof(int));

// wrap raw pointer with a device_ptr
thrust::device_ptr<int> dev_ptr(raw_ptr);

// use device_ptr in thrust algorithms
thrust::fill(dev_ptr, dev_ptr + N, (int) 0);
```

To extract a raw pointer from a `device_ptr` the `raw_pointer_cast` should be applied as follows:

```cpp
size_t N = 10;

// create a device_ptr
thrust::device_ptr<int> dev_ptr = thrust::device_malloc<int>(N);

// extract raw pointer from device_ptr
int * raw_ptr = thrust::raw_pointer_cast(dev_ptr);
```

Another reason to distinguish between iterators and pointers is that iterators can be used to traverse many kinds of data structures. For example, the STL provides a linked list container (`std::list`) that provides bidirectional (but not random access) iterators. Although Thrust does not provide device implementations of such containers, it is compatible with them.

```cpp
#include <thrust/device_vector.h>
#include <thrust/copy.h>
#include <list>
#include <vector>

int main(void)
{
    // create an STL list with 4 values
    std::list<int> stl_list;
    stl_list.push_back(10);
    stl_list.push_back(20);
    stl_list.push_back(30);
    stl_list.push_back(40);

    // initialize a device_vector with the list
    thrust::device_vector<int> D(stl_list.begin(), stl_list.end());

    // copy a device_vector into an STL vector
    std::vector<int> stl_vector(D.size());
    thrust::copy(D.begin(), D.end(), stl_vector.begin());

    return 0;
}
```

For Future Reference: The iterators we’ve covered so far are useful, but fairly basic. In addition to these normal iterators, Thrust also provides a collection of fancy iterators with names like counting_iterator and zip_iterator. While they look and feel like normal iterators, fancy iterators are capable of more exciting things. We’ll revisit this topic later in the tutorial.

Chapter 3. Algorithms

Thrust provides a large number of common parallel algorithms. Many of these algorithms have direct analogs in the STL, and when an equivalent STL function exists, we choose the name (e.g. thrust::sort and std::sort). All algorithms in Thrust have implementations for both host and device.
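As a small illustration of this host/device duality, the sketch below (not part of the original guide; the vector sizes and values are arbitrary) calls the same thrust::reduce algorithm on a host_vector and on a device_vector. Only the iterator types differ, and the host or device path is chosen at compile time.

```cpp
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/reduce.h>
#include <iostream>

int main(void)
{
    // the same data in host memory and in GPU device memory
    thrust::host_vector<int>   h(4, 5);   // {5, 5, 5, 5} on the host
    thrust::device_vector<int> d = h;     // copied to the device

    // identical calls; the iterator type selects the host or device implementation
    int host_sum   = thrust::reduce(h.begin(), h.end());
    int device_sum = thrust::reduce(d.begin(), d.end());

    std::cout << host_sum << " " << device_sum << std::endl;   // prints "20 20"
    return 0;
}
```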
Specifically, when a Thrust algorithm is invoked with a host iterator, then the host path is dispatched. Similarly, a device implementation is called when a device iterator is used to define a range.

With the exception of thrust::copy, which can copy data between host and device, all iterator arguments to a Thrust algorithm should live in the same place: either all on the host or all on the device. When this requirement is violated the compiler will produce an error message.

3.1. Transformations

Transformations are algorithms that apply an operation to each element in a set of (zero or more) input ranges and then store the result in a destination range. One example we have already seen is thrust::fill, which sets all elements of a range to a specified value. Other transformations include thrust::sequence, thrust::replace, and of course thrust::transform. Refer to the documentation for a complete listing.

The following source code demonstrates several of the transformation algorithms. Note that thrust::negate and thrust::modulus are known as functors in C++ terminology. Thrust provides these and other common functors like plus and multiplies in the file thrust/functional.h.

```cpp
#include <thrust/device_vector.h>
#include <thrust/transform.h>
#include <thrust/sequence.h>
#include <thrust/copy.h>
#include <thrust/fill.h>
#include <thrust/replace.h>
#include <thrust/functional.h>
#include <iostream>
#include <iterator>

int main(void)
{
    // allocate three device_vectors with 10 elements
    thrust::device_vector<int> X(10);
    thrust::device_vector<int> Y(10);
    thrust::device_vector<int> Z(10);

    // initialize X to 0,1,2,3, ....
    thrust::sequence(X.begin(), X.end());

    // compute Y = -X
    thrust::transform(X.begin(), X.end(), Y.begin(), thrust::negate<int>());

    // fill Z with twos
    thrust::fill(Z.begin(), Z.end(), 2);

    // compute Y = X mod 2
    thrust::transform(X.begin(), X.end(), Z.begin(), Y.begin(), thrust::modulus<int>());

    // replace all the ones in Y with tens
    thrust::replace(Y.begin(), Y.end(), 1, 10);

    // print Y
    thrust::copy(Y.begin(), Y.end(), std::ostream_iterator<int>(std::cout, "\n"));

    return 0;
}
```

While the functors in thrust/functional.h cover most of the built-in arithmetic and comparison operations, we often want to do something different. For example, consider the vector operation y <- a * x + y where x and y are vectors and a is a scalar constant. This is the well-known SAXPY operation provided by any BLAS library.

If we want to implement SAXPY with Thrust we have a few options. The first is to use two transformations (one addition and one multiplication) and a temporary vector filled with the value a. A better choice is to use a single transformation with a user-defined functor that does exactly what we want. We illustrate both approaches in the source code below.

```cpp
struct saxpy_functor
{
    const float a;

    saxpy_functor(float _a) : a(_a) {}

    __host__ __device__
    float operator()(const float& x, const float& y) const
    {
        return a * x + y;
    }
};

void saxpy_fast(float A, thrust::device_vector<float>& X, thrust::device_vector<float>& Y)
{
    // Y <- A * X + Y
    thrust::transform(X.begin(), X.end(), Y.begin(), Y.begin(), saxpy_functor(A));
}

void saxpy_slow(float A, thrust::device_vector<float>& X, thrust::device_vector<float>& Y)
{
    thrust::device_vector<float> temp(X.size());

    // temp <- A
    thrust::fill(temp.begin(), temp.end(), A);

    // temp <- A * X
    thrust::transform(X.begin(), X.end(), temp.begin(), temp.begin(), thrust::multiplies<float>());

    // Y <- A * X + Y
    thrust::transform(temp.begin(), temp.end(), Y.begin(), Y.begin(), thrust::plus<float>());
}
```

Both saxpy_fast and saxpy_slow are valid SAXPY implementations, however saxpy_fast will be significantly faster than saxpy_slow.
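As a quick usage sketch (not taken from the guide; the vector sizes and values are illustrative, and saxpy_functor, saxpy_fast and saxpy_slow from the listing above are assumed to be in scope), the fused version is called like any other helper on two device vectors:

```cpp
#include <thrust/device_vector.h>
// assumes the saxpy_functor / saxpy_fast / saxpy_slow definitions shown above

int main(void)
{
    thrust::device_vector<float> X(1000, 1.0f);   // x = {1, 1, ..., 1}
    thrust::device_vector<float> Y(1000, 2.0f);   // y = {2, 2, ..., 2}

    // y <- 3 * x + y, so every element of Y becomes 5
    saxpy_fast(3.0f, X, Y);

    // saxpy_slow(3.0f, X, Y) would be called the same way,
    // at the cost of an extra temporary vector and more memory traffic
    return 0;
}
```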
Ignoring the cost of allocating the temp vector and the arithmetic operations we have the following costs:

- **Fast SAXPY**: performs 2N reads and N writes
- **Slow SAXPY**: performs 4N reads and 3N writes

Since SAXPY is memory bound (its performance is limited by memory bandwidth, not floating point performance) the larger number of reads and writes makes saxpy_slow much more expensive. In contrast, saxpy_fast will perform about as fast as SAXPY in an optimized BLAS implementation. In memory bound algorithms like SAXPY it is generally worthwhile to apply kernel fusion (combining multiple operations into a single kernel) to minimize the number of memory transactions.

**thrust::transform** only supports transformations with one or two input arguments (e.g. $f(x) \rightarrow y$ and $f(x,x) \rightarrow y$). When a transformation uses more than two input arguments it is necessary to use a different approach. The arbitrary_transformation example demonstrates a solution that uses thrust::zip_iterator and thrust::for_each.

### 3.2. Reductions

A reduction algorithm uses a binary operation to reduce an input sequence to a single value. For example, the sum of an array of numbers is obtained by reducing the array with a plus operation. Similarly, the maximum of an array is obtained by reducing with an operator that takes two inputs and returns the maximum. The sum of an array is implemented with thrust::reduce as follows:

```cpp
int sum = thrust::reduce(D.begin(), D.end(), (int) 0, thrust::plus<int>());
```

The first two arguments to reduce define the range of values while the third and fourth parameters provide the initial value and reduction operator respectively. Actually, this kind of reduction is so common that it is the default choice when no initial value or operator is provided. The following three lines are therefore equivalent:

```cpp
int sum = thrust::reduce(D.begin(), D.end(), (int) 0, thrust::plus<int>());
int sum = thrust::reduce(D.begin(), D.end(), (int) 0);
int sum = thrust::reduce(D.begin(), D.end());
```

Although thrust::reduce is sufficient to implement a wide variety of reductions, Thrust provides a few additional functions for convenience (like the STL). For example, thrust::count returns the number of instances of a specific value in a given sequence:

```cpp
int count = thrust::count(X.begin(), X.end(), value);
```

```cpp
#include <thrust/count.h>
#include <thrust/device_vector.h>
...
// put three 1s in a device_vector
thrust::device_vector<int> vec(5, 0);
vec[1] = 1;
vec[3] = 1;
vec[4] = 1;

// count the 1s
int result = thrust::count(vec.begin(), vec.end(), 1);
// result is three
```

Other reduction operations include thrust::count_if, thrust::min_element, thrust::max_element, thrust::is_sorted, thrust::inner_product, and several others. Refer to the documentation for a complete listing.

The SAXPY example in the Transformations section showed how kernel fusion can be used to reduce the number of memory transfers used by a transformation kernel. With thrust::transform_reduce we can also apply kernel fusion to reduction kernels. Consider the following example which computes the norm of a vector.
```cpp
#include <thrust/transform_reduce.h>
#include <thrust/functional.h>
#include <thrust/device_vector.h>
#include <thrust/host_vector.h>
#include <cmath>
#include <iostream>

// square<T> computes the square of a number f(x) -> x*x
template <typename T>
struct square
{
    __host__ __device__
    T operator()(const T& x) const
    {
        return x * x;
    }
};

int main(void)
{
    // initialize host array
    float x[4] = {1.0, 2.0, 3.0, 4.0};

    // transfer to device
    thrust::device_vector<float> d_x(x, x + 4);

    // setup arguments
    square<float>       unary_op;
    thrust::plus<float> binary_op;
    float init = 0;

    // compute norm
    float norm = std::sqrt( thrust::transform_reduce(d_x.begin(), d_x.end(), unary_op, init, binary_op) );

    std::cout << norm << std::endl;

    return 0;
}
```

Here we have a unary operator called `square` that squares each element of the input sequence. The sum of squares is then computed using a standard `plus` reduction. Like the slower version of the SAXPY transformation, we could implement `norm` with multiple passes: first a `transform` using `square` or perhaps just `multiplies` and then a `plus` reduction over a temporary array. However this would be unnecessarily wasteful and considerably slower. By fusing the square operation with the reduction kernel we again have a highly optimized implementation which offers the same performance as hand-written kernels.

### 3.3. Prefix-Sums

Parallel prefix-sums, or scan operations, are important building blocks in many parallel algorithms such as stream compaction and radix sort. Consider the following source code which illustrates an inclusive scan operation using the default `plus` operator:

```cpp
#include <thrust/scan.h>

int data[6] = {1, 0, 2, 2, 1, 3};

thrust::inclusive_scan(data, data + 6, data); // in-place scan

// data is now {1, 1, 3, 5, 6, 9}
```

In an inclusive scan each element of the output is the corresponding **partial sum** of the input range. For example, `data[2] = data[0] + data[1] + data[2]`. An exclusive scan is similar, but shifted by one place to the right:

```cpp
#include <thrust/scan.h>

int data[6] = {1, 0, 2, 2, 1, 3};

thrust::exclusive_scan(data, data + 6, data); // in-place scan

// data is now {0, 1, 1, 3, 5, 6}
```

So now `data[2] = data[0] + data[1]`. As these examples show, `inclusive_scan` and `exclusive_scan` are permitted to be performed in-place. Thrust also provides the functions `transform_inclusive_scan` and `transform_exclusive_scan` which apply a unary function to the input sequence before performing the scan. Refer to the documentation for a complete list of scan variants.

3.4. Reordering

Thrust provides support for partitioning and stream compaction through the following algorithms:

- **copy_if**: copy elements that pass a predicate test
- **partition**: reorder elements according to a predicate (true values precede false values)
- **remove** and **remove_if**: remove elements that fail a predicate test
- **unique**: remove consecutive duplicates within a sequence

Refer to the documentation for a complete list of reordering functions and examples of their usage.

3.5. Sorting

Thrust offers several functions to sort data or rearrange data according to a given criterion. The `thrust::sort` and `thrust::stable_sort` functions are direct analogs of `sort` and `stable_sort` in the STL.

```cpp
#include <thrust/sort.h>
...
const int N = 6;
int A[N] = {1, 4, 2, 8, 5, 7};
thrust::sort(A, A + N);
// A is now {1, 2, 4, 5, 7, 8}
```

In addition, Thrust provides `thrust::sort_by_key` and `thrust::stable_sort_by_key`, which sort key-value pairs stored in separate places.

```cpp
#include <thrust/sort.h>
...
const int N = 6;
int  keys[N]   = {1,   4,   2,   8,   5,   7};
char values[N] = {'a', 'b', 'c', 'd', 'e', 'f'};
thrust::sort_by_key(keys, keys + N, values);
// keys is now   {1, 2, 4, 5, 7, 8}
// values is now {'a', 'c', 'b', 'e', 'f', 'd'}
```

Like their STL brethren, the sorting functions also accept user-defined comparison operators:

```cpp
#include <thrust/sort.h>
#include <thrust/functional.h>
...
const int N = 6;
int A[N] = {1, 4, 2, 8, 5, 7};
thrust::stable_sort(A, A + N, thrust::greater<int>());
// A is now {8, 7, 5, 4, 2, 1}
```

Chapter 4. Fancy Iterators

Fancy iterators perform a variety of valuable purposes. In this section we’ll show how fancy iterators allow us to attack a broader class of problems with the standard Thrust algorithms. For those familiar with the Boost C++ Library, note that our fancy iterators were inspired by (and generally derived from) those in the Boost Iterator Library.

4.1. constant_iterator

Arguably the simplest of the bunch, constant_iterator is simply an iterator that returns the same value whenever we access it. In the following example we initialize a constant iterator with the value 10.

```cpp
#include <thrust/iterator/constant_iterator.h>
...
// create iterators
thrust::constant_iterator<int> first(10);
thrust::constant_iterator<int> last = first + 3;

first[0]   // returns 10
first[1]   // returns 10
first[100] // returns 10

// sum of [first, last)
thrust::reduce(first, last);   // returns 30 (i.e. 3 * 10)
```

Whenever an input sequence of constant values is needed, constant_iterator is a convenient and efficient solution.

4.2. counting_iterator

If a sequence of increasing values is required, then counting_iterator is the appropriate choice. Here we initialize a counting_iterator with the value 10 and access it like an array.

```cpp
#include <thrust/iterator/counting_iterator.h>
...
// create iterators
thrust::counting_iterator<int> first(10);
thrust::counting_iterator<int> last = first + 3;

first[0]   // returns 10
first[1]   // returns 11
first[100] // returns 110

// sum of [first, last)
thrust::reduce(first, last);   // returns 33 (i.e. 10 + 11 + 12)
```

4.3. transform_iterator

In the Algorithms section we spoke about kernel fusion, i.e. combining separate algorithms like transform and reduce into a single transform_reduce operation. The transform_iterator allows us to apply the same technique, even when we don't have a special transform_xxx version of the algorithm. This example shows another way to fuse a transformation with a reduction, this time with just plain reduce applied to a transform_iterator.

```cpp
#include <thrust/iterator/transform_iterator.h>
...
// initialize vector
thrust::device_vector<int> vec(3);
vec[0] = 10;
vec[1] = 20;
vec[2] = 30;

// create iterators (types omitted)
... first = thrust::make_transform_iterator(vec.begin(), negate<int>());
... last  = thrust::make_transform_iterator(vec.end(),   negate<int>());

first[0]   // returns -10
first[1]   // returns -20

// sum of [first, last)
thrust::reduce(first, last);   // returns -60 (i.e. -10 + -20 + -30)
```

Note, we have omitted the types for iterators first and last for simplicity. One downside of transform_iterator is that it can be cumbersome to specify the full type of the iterator, which can be quite lengthy. For this reason, it is common practice to simply put the call to make_transform_iterator in the arguments of the algorithm being invoked.
For example,

```cpp
// sum of [first, last)
thrust::reduce(thrust::make_transform_iterator(vec.begin(), negate<int>()),
               thrust::make_transform_iterator(vec.end(),   negate<int>()));
```

allows us to avoid creating a variable to store first and last.

4.4. permutation_iterator

In the previous section we showed how transform_iterator is used to fuse a transformation with another algorithm to avoid unnecessary memory operations. The permutation_iterator is similar: it allows us to fuse gather and scatter operations with Thrust algorithms, or even other fancy iterators. The following example shows how to fuse a gather operation with a reduction.

```cpp
#include <thrust/iterator/permutation_iterator.h>
...
// gather locations
thrust::device_vector<int> map(4);
map[0] = 3;
map[1] = 1;
map[2] = 0;
map[3] = 5;

// array to gather from
thrust::device_vector<int> source(6);
source[0] = 10;
source[1] = 20;
source[2] = 30;
source[3] = 40;
source[4] = 50;
source[5] = 60;

// fuse gather with reduction:
// sum = source[map[0]] + source[map[1]] + ...
int sum = thrust::reduce(thrust::make_permutation_iterator(source.begin(), map.begin()),
                         thrust::make_permutation_iterator(source.begin(), map.end()));
```

Here we have used the make_permutation_iterator function to simplify the construction of the permutation_iterators. The first argument to make_permutation_iterator is the source array of the gather operation and the second is the list of map indices. Note that we pass in source.begin() for the first argument in both cases, but vary the second argument to define the beginning and end of the sequence.

When a permutation_iterator is used as an output sequence of a function it is equivalent to fusing a scatter operation to the algorithm. In general permutation_iterator allows you to operate on a specific set of values in a sequence instead of the entire sequence.

4.5. zip_iterator

Keep reading, we’ve saved the best iterator for last! The zip_iterator is an extremely useful gadget: it takes multiple input sequences and yields a sequence of tuples. In this example we “zip” together a sequence of int and a sequence of char into a sequence of tuple<int, char> and compute the tuple with the maximum value.

```cpp
#include <thrust/iterator/zip_iterator.h>
...
// initialize vectors
thrust::device_vector<int>  A(3);
thrust::device_vector<char> B(3);
A[0] = 10;  A[1] = 20;  A[2] = 30;
B[0] = 'x'; B[1] = 'y'; B[2] = 'z';

// create iterators (types omitted)
first = thrust::make_zip_iterator(thrust::make_tuple(A.begin(), B.begin()));
last  = thrust::make_zip_iterator(thrust::make_tuple(A.end(),   B.end()));

first[0]   // returns tuple(10, 'x')
first[1]   // returns tuple(20, 'y')
first[2]   // returns tuple(30, 'z')

// maximum of [first, last)
thrust::maximum< thrust::tuple<int, char> > binary_op;
thrust::tuple<int, char> init = first[0];
thrust::reduce(first, last, init, binary_op);   // returns tuple(30, 'z')
```

What makes `zip_iterator` so useful is that most algorithms accept either one, or occasionally two, input sequences. The `zip_iterator` allows us to combine many independent sequences into a single sequence of tuples, which can be processed by a broad set of algorithms.

Refer to the arbitrary_transformation example to see how to implement a ternary transformation with `zip_iterator` and for_each. A simple extension of this example would allow you to compute transformations with multiple output sequences as well.

In addition to convenience, `zip_iterator` allows us to implement programs more efficiently.
For example, storing 3d points as an array of float3 in CUDA is generally a bad idea, since array accesses are not properly coalesced. With `zip_iterator` we can store the three coordinates in three separate arrays, which does permit coalesced memory access. In this case, we use `zip_iterator` to create a virtual array of 3d vectors which we can feed in to Thrust algorithms. Refer to the dot_products_with_zip example for additional details.

Chapter 5. Additional Resources

This guide only scratches the surface of what you can do with Thrust. The following resources can help you learn to do more with Thrust or provide assistance when problems arise.

- Comprehensive documentation of Thrust’s API
- A list of Frequently Asked Questions
- Collection of example programs

We strongly encourage users to subscribe to the thrust-users mailing list. The mailing list is a great place to seek out help from the Thrust developers and other Thrust users.

Chapter 6. Notices

6.1. Notice

This document is provided for information purposes only and shall not be regarded as a warranty of a certain functionality, condition, or quality of a product. NVIDIA Corporation ("NVIDIA") makes no representations or warranties, expressed or implied, as to the accuracy or completeness of the information contained in this document and assumes no responsibility for any errors contained herein. NVIDIA shall have no liability for the consequences or use of such information or for any infringement of patents or other rights of third parties that may result from its use. This document is not a commitment to develop, release, or deliver any Material (defined below), code, or functionality.

NVIDIA reserves the right to make corrections, modifications, enhancements, improvements, and any other changes to this document, at any time without notice.

Customer should obtain the latest relevant information before placing orders and should verify that such information is current and complete.

NVIDIA products are sold subject to the NVIDIA standard terms and conditions of sale supplied at the time of order acknowledgement, unless otherwise agreed in an individual sales agreement signed by authorized representatives of NVIDIA and customer ("Terms of Sale"). NVIDIA hereby expressly objects to applying any customer general terms and conditions with regards to the purchase of the NVIDIA product referenced in this document. No contractual obligations are formed either directly or indirectly by this document.

NVIDIA products are not designed, authorized, or warranted to be suitable for use in medical, military, aircraft, space, or life support equipment, nor in applications where failure or malfunction of the NVIDIA product can reasonably be expected to result in personal injury, death, or property or environmental damage. NVIDIA accepts no liability for inclusion and/or use of NVIDIA products in such equipment or applications and therefore such inclusion and/or use is at customer’s own risk.

NVIDIA makes no representation or warranty that products based on this document will be suitable for any specified use. Testing of all parameters of each product is not necessarily performed by NVIDIA. It is customer’s sole responsibility to evaluate and determine the applicability of any information contained in this document, ensure the product is suitable and fit for the application planned by customer, and perform the necessary testing for the application in order to avoid a default of the application or the product.
Weaknesses in customer’s product designs may affect the quality and reliability of the NVIDIA product and may result in additional or different conditions and/or requirements beyond those contained in this document. NVIDIA accepts no liability related to any default, damage, costs, or problem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this document or (ii) customer product designs. No license, either expressed or implied, is granted under any NVIDIA patent right, copyright, or other NVIDIA intellectual property right under this document. Information published by NVIDIA regarding third-party products or services does not constitute a license from NVIDIA to use such products or services or a warranty or endorsement thereof. Use of such information may require a license from a third party under the patents or other intellectual property rights of the third party, or a license from NVIDIA under the patents or other intellectual property rights of NVIDIA. Reproduction of information in this document is permissible only if approved in advance by NVIDIA in writing, reproduced without alteration and in full compliance with all applicable export laws and regulations, and accompanied by all associated conditions, limitations, and notices. THIS DOCUMENT AND ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, “MATERIALS”) ARE BEING PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE. TO THE EXTENT NOT PROHIBITED BY LAW, IN NO EVENT WILL NVIDIA BE LIABLE FOR ANY DAMAGES, INCLUDING WITHOUT LIMITATION ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, PUNITIVE, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARISING OUT OF ANY USE OF THIS DOCUMENT, EVEN IF NVIDIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. Notwithstanding any damages that customer might incur for any reason whatsoever, NVIDIA’s aggregate and cumulative liability towards customer for the products described herein shall be limited in accordance with the Terms of Sale for the product. 6.2. OpenCL OpenCL is a trademark of Apple Inc. used under license to the Khronos Group Inc. 6.3. Trademarks NVIDIA and the NVIDIA logo are trademarks or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Copyright ©2011-2023, NVIDIA Corporation & Affiliates. All rights reserved
{"Source-Url": "https://docs.nvidia.com/cuda/pdf/Thrust_Quick_Start_Guide.pdf", "len_cl100k_base": 7873, "olmocr-version": "0.1.53", "pdf-total-pages": 28, "total-fallback-pages": 0, "total-input-tokens": 61436, "total-output-tokens": 9353, "length": "2e12", "weborganizer": {"__label__adult": 0.0005125999450683594, "__label__art_design": 0.0006527900695800781, "__label__crime_law": 0.0003426074981689453, "__label__education_jobs": 0.0003135204315185547, "__label__entertainment": 0.0001233816146850586, "__label__fashion_beauty": 0.00023090839385986328, "__label__finance_business": 0.0002493858337402344, "__label__food_dining": 0.0003650188446044922, "__label__games": 0.0019054412841796875, "__label__hardware": 0.01097869873046875, "__label__health": 0.0003001689910888672, "__label__history": 0.00024056434631347656, "__label__home_hobbies": 0.0001499652862548828, "__label__industrial": 0.0007014274597167969, "__label__literature": 0.00022101402282714844, "__label__politics": 0.0002713203430175781, "__label__religion": 0.0006875991821289062, "__label__science_tech": 0.0222930908203125, "__label__social_life": 4.947185516357422e-05, "__label__software": 0.00997161865234375, "__label__software_dev": 0.9482421875, "__label__sports_fitness": 0.000396728515625, "__label__transportation": 0.0005879402160644531, "__label__travel": 0.00022292137145996096}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34371, 0.03119]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34371, 0.49451]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34371, 0.77812]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 0, null], [0, 1815, false], [1815, 1815, null], [1815, 2882, null], [2882, 2882, null], [2882, 3257, null], [3257, 3257, null], [3257, 4658, null], [4658, 6563, null], [6563, 9345, null], [9345, 10497, null], [10497, 12494, null], [12494, 14268, null], [14268, 16775, null], [16775, 18328, null], [18328, 20173, null], [20173, 21667, null], [21667, 21754, null], [21754, 21754, null], [21754, 23222, null], [23222, 24797, null], [24797, 26784, null], [26784, 28528, null], [28528, 29031, null], [29031, 29031, null], [29031, 32355, null], [32355, 34371, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 0, null], [0, 1815, true], [1815, 1815, null], [1815, 2882, null], [2882, 2882, null], [2882, 3257, null], [3257, 3257, null], [3257, 4658, null], [4658, 6563, null], [6563, 9345, null], [9345, 10497, null], [10497, 12494, null], [12494, 14268, null], [14268, 16775, null], [16775, 18328, null], [18328, 20173, null], [20173, 21667, null], [21667, 21754, null], [21754, 21754, null], [21754, 23222, null], [23222, 24797, null], [24797, 26784, null], [26784, 28528, null], [28528, 29031, null], [29031, 29031, null], [29031, 32355, null], [32355, 34371, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 34371, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34371, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34371, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34371, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34371, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34371, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], 
[5000, 34371, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34371, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34371, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34371, null]], "pdf_page_numbers": [[0, 0, 1], [0, 0, 2], [0, 1815, 3], [1815, 1815, 4], [1815, 2882, 5], [2882, 2882, 6], [2882, 3257, 7], [3257, 3257, 8], [3257, 4658, 9], [4658, 6563, 10], [6563, 9345, 11], [9345, 10497, 12], [10497, 12494, 13], [12494, 14268, 14], [14268, 16775, 15], [16775, 18328, 16], [18328, 20173, 17], [20173, 21667, 18], [21667, 21754, 19], [21754, 21754, 20], [21754, 23222, 21], [23222, 24797, 22], [24797, 26784, 23], [26784, 28528, 24], [28528, 29031, 25], [29031, 29031, 26], [29031, 32355, 27], [32355, 34371, 28]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34371, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
abecdb26786297019fd350182be26af554995d3f
A framework for dynamic data source identification and orchestration on the Web

Alexander Berezovskiy, University of Southampton, University Road, Highfield, Southampton, SO17 1BJ, United Kingdom, +44 (0)7775 675360, letoosh@letoosh.com

Dr Leslie Carr, University of Southampton, University Road, Highfield, Southampton, SO17 1BJ, United Kingdom, +44 (0)23 8059 4479, lac@ecs.soton.ac.uk

ABSTRACT

The current Web offers a very large number of solutions and services, ranging from social networking and content delivery services to business applications and management systems. However, in the general case the solutions provided are largely disintegrated, with each product operating in its own environment. Additionally, many of the products are unknown to the end user and finding the most suitable application is commonly a non-trivial task. The current project is an effort to provide a ubiquitous interface for Web application integration. The suggested approach allows for dynamic identification of the applications most suitable for a given task and access to their data using a unified interface in the REST architectural style. A novel algorithm for identification of the most appropriate data source is introduced within the study. Evaluation of the overall system and the obtained results is provided.

Categories and Subject Descriptors

H.3.3, H.3.5 [Information Storage and Retrieval]: Information Search and Retrieval – retrieval models, search process, selection process; Online Information Services – data sharing, web-based services.

General Terms: Algorithms, Management, Design.

Keywords: Information retrieval, web integration, unified data access, data source identification.

1. INTRODUCTION

It can be said that there has been an immense increase in the use of social applications in the past several years and people accept online communities as a part of their everyday lives. Additionally, businesses tend to pay more attention to the ways in which online social and management tools can be used [1]. Therefore, many operations involved in the processing and manipulation of people's social, professional and personal information are moving onto Web platforms. Whilst this can be seen as a ubiquitous source for data retrieval and processing, it presents new challenges to the ways in which information can be retrieved, composed and manipulated.

In the general case, web applications are not interconnected and every product operates in its own environment. In the absence of a flexible and scalable framework which could be used to interact with any web resource, both developers and end users are forced to switch between applications and interfaces in order to fully benefit from their services. Therefore, the current project is an effort to provide such a framework and a ubiquitous environment for integration on the Web.

1.1 Definitions

For the purpose of the current paper and in order to avoid ambiguity, the following definitions need to be introduced within the study:

- “Application” is defined as “a software product designed to help users perform a task”
- “Service” is defined as “a set of functionality provided by an application aimed at solving one or many related tasks”

Therefore, Web applications are accessible via a Web browser, and the methods used and applied to such applications represent a set of common technologies involved in the development, deployment and exploitation of the applications. In the context of the current paper, Web services represent a set of features provided by a Web application.
1.2 Project goals

With the rapid growth of the Web and the advent of associated technologies, it is now possible for Web applications to interact within the context of provided services. However, the methods currently available put a number of limitations on the scope of possible interaction. Therefore, it can be beneficial for the future development of the Web to construct a theoretical and practical framework which can be used to uniformly interact with any Web application. The current project is an effort to create such a framework and provide a ubiquitous environment for dynamic Web integration. More specifically, the project intends to focus on:

- Dynamic identification of the most suitable service providers (applications),
- Provision of a universal interface to access and manipulate the providers' data.

The project aims to provide an interface for both developers and ordinary Web users to manipulate the data composed from disparate web sources on the basis of provided services.

2. ANALYSIS

The rapid development of the Web and related technologies has reflected the need for a scalable data integration platform in new emerging products. Examples of such products are the ProgrammableWeb project, the OpenID technology and the OpenSocial platform. Although many of these products are dedicated to the mashup method and are unsuitable for large-scale data integration [2], some technologies such as OpenID and OpenSocial provide a more general approach to the problem.

The OpenID initiative suggests a technology to universally authenticate against a web application using existing credentials on some other application [3]. For example, if a user has an account on Google, this can be used as their credentials to log into ProgrammableWeb. Since OpenID specifies a protocol for authentication and is not tied to any specific product, it can be used as a way to enable communication between two applications by means of identifying a user [4]. Although the OpenID standard does not provide a framework for full data composition and is limited to authentication routines, it demonstrates the potential for large-scale integration on the Web and the requirement for universally accessible data. It is stated that OpenID is gaining wide adoption and the number of users is estimated at 500 million, with more than 48,000 OpenID providers [5].

Another popular project aimed at web integration is the OpenSocial platform advocated by Google. OpenSocial provides a set of APIs to access the data of its partner applications. It aims at integration at both the functional and data levels, and is focused on social networking platforms. For the purpose of the current study and in order to identify major design issues, it is suggested to look at the details of the platform implementation.

2.1 Case Study: OpenSocial

The OpenSocial platform aims at universal integration of social networking systems using an approach similar to mashup creation. It utilizes the general concepts of social networking data in an attempt to abstract away from implementation and provide a generic interface to access and manipulate the data [6]. Operations are performed at the level of data entities existing within social networking systems and the relationships between them. The entities include user profiles, their activities, media items (photos, videos or other similar content), messages and more [7].
A unified JavaScript framework is provided to compose data into functional elements, which can be combined into a single page in a form similar to mashup construction and executed at the client side. Each of these elements is treated as a separate “gadget” providing some functionality and can be used separately. Thus, users are free to create their own gadgets and embed them into custom Web pages [8]. Apart from this, information can be gathered from multiple data providers, which can be dynamically integrated into the platform. Each of the providers must implement a specific interface in order for their data to be available within the platform.

In general, OpenSocial provides an extensive and rich framework for the integration of social networking sites. The outside interface is independent of the underlying data provider and the platform is capable of manipulating the data in a universal bi-directional manner. However, the system is targeted at social networking resources, which limits the scope of application and significantly reduces the flexibility of the system [9]. Additionally, existing applications cannot be immediately integrated into the system unless additional modifications are made in order to provide the required interface. Finally, it can be said that the platform lacks serious management and integration routines required for large-scale adoption, such as flexible and unique data addressing and interaction between gadgets [10, 11].

3. DESIGN

3.1 Overview

Due to the nature of the target solution, the overall project can be divided into two major tasks: data source identification and data retrieval. Since the system has to interact with numerous Web resources, it is itself implemented and deployed online as a Web application.

The project takes a user-centric approach to the design and implementation. It is implied that users tend to use different Web applications for their needs depending on their location, age, background and other factors [12]. Therefore, data for different users may be spread across different applications. However, the project does not intend to limit the scope of available data to those attached to a given user. Indeed, any web page can be treated as a web application in the context of the system since the very existence of the page implies the existence of the data available within.

The general functional sequence of the system can be defined in the following order. On receiving a data request, the system has to determine which applications to use. Next, a call can be made to the selected application in order to perform the requested action on the data (either retrieve or push information to the resource). Depending on the result of the call and the nature of the data request, the system returns the data or reports on the results (Figure 1).

In order to uniquely address the type of data required, the project follows a service-oriented architecture by devising common data patterns for similar applications. For instance, most social networks provide user profile data, which typically contains a profile picture, name and associated activity or interest data. The devised patterns are then organized in hierarchical service structures. An example of a devised structure common to many applications on the Web is demonstrated in Figure 2. Thus, every application can be assigned one or many services, each corresponding to some data entity within the application (for example, Facebook provides a “Social Networking / Profile / Status” service and related data).
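To make the hierarchical service structure concrete, the following sketch (not part of the paper; all type names, service paths and application names are illustrative only) models a path such as “Social Networking / Profile / Status” as a simple tree and records which applications are registered for a given service path:

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// One node in the hierarchical service tree, e.g. "Social Networking" -> "Profile" -> "Status".
struct ServiceNode {
    std::map<std::string, ServiceNode> children;   // child services by name
    std::vector<std::string> applications;         // applications registered for this service
};

// Register an application under a service path such as {"Social Networking", "Profile", "Status"}.
void register_application(ServiceNode& root,
                          const std::vector<std::string>& path,
                          const std::string& application) {
    ServiceNode* node = &root;
    for (const std::string& segment : path)
        node = &node->children[segment];           // creates missing levels on the fly
    node->applications.push_back(application);
}

int main() {
    ServiceNode root;
    register_application(root, {"Social Networking", "Profile", "Status"}, "Facebook");
    register_application(root, {"Social Networking", "Profile"}, "LinkedIn");

    // A parent service implicitly covers its child services (cf. Section 3.2),
    // so a lookup for "Profile / Status" may also consider providers of "Profile".
    const ServiceNode& profile = root.children["Social Networking"].children["Profile"];
    std::cout << profile.applications[0] << std::endl;   // prints "LinkedIn"
    return 0;
}
```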
There is, however, no intention to predefine a fixed tree of services to be used within the system. Instead, the system aims to provide functionality to modify the set of available services at any time in order to enhance flexibility and accommodate new applications.

Dynamic application discovery within the vast amount of web resources is commonly a non-trivial task and involves multiple data mining techniques, which are usually computationally expensive, resource-exhaustive, or both. For the purpose of the current study, application discovery on the Web is not the target objective and is outside the scope of the project. Instead, following the general concept of social interaction on the Web, the project aims to provide features for end users to add new applications to the system. However, the project aims to provide capabilities to identify the most appropriate data resource for a given service within the set of registered applications.

The next two sections provide a detailed overview of the system architecture. It is assumed that all data requests received by the system are addressed to a particular service, so that the nature of the expected result is known.

### 3.2 Data source identification

As shown in 3.1, people tend to use different applications depending on various parameters, such as their location, background and other factors. Therefore, in order to compute the most appropriate application for a given service, the system needs to keep track of such parameters and the applications people use for various tasks. Whilst it is apparent that a user's country of residence and language spoken affect their choice of application, the effect of the other parameters is uncertain and requires extensive testing and user surveys. During the system design stage it was chosen to dynamically build a set of user parameters in order to enhance the flexibility and effectiveness of the identification process. This way, users are free to provide their details based on the set of available parameters and corresponding options.

Each set of details (parameter-option pairs) is then combined into a single data entity corresponding to a user's personal data environment (hereafter referred to as “environment”). Instead of directly assigning a user's selections to his or her profile, they are assigned to an environment instance, which in turn gets assigned to the user profile. It is expected that every unique set of selections corresponds to a single environment instance. Thus, for example, two users who share the same country and language, and have not provided any other information, get the same environment instance assigned as the set of their personal details. This approach allows for more efficient computation of similar users and their preferred applications as it does not directly depend on the number of users, but utilizes only their details (environments) and preferences (usage records).

A user may then explicitly choose the applications they use and, if desired, the services used within the applications. Due to the hierarchical service model, it can be implied that if a user makes use of some service on a particular application, he or she also uses the child (contained) services within the given one, unless specified otherwise. For example, if the system knows the current user has their social networking profile on LinkedIn, it can be assumed that the user's profile picture, name and similar data is also contained within LinkedIn.
As a result, the system would need to consider all the usage records of all users to effectively compute the most appropriate application. However, this makes the computation dependent on the number of users, whilst the actual value is the number of users who use an application in a given environment. Therefore, a separate usage statistics for every application may be built in order to avoid this limitation. The general design of the identification part of the system is shown in Figure 3. ![Figure 2: Data tree example](image) 3.3 Data retrieval When a data request is received, the system initially tries to identify the most appropriate resource to handle the request. When the resource is identified, the system needs to contact the target application and perform the requested operation. However, every application provides a different Application Programming Interface (API) and no common operation can be suitable for all of them. There is usually no semantics provided with APIs, so automatic discovery of provided interfaces can not be achieved. Moreover, many applications do not even provide an API in any form, hence direct data access may not be possible. Therefore, the system has to be flexible enough in order to accommodate any web resource and allow for granular data retrieval without the need to communicate with an API. In order to achieve such level of flexibility, the system architecture provides a framework to dynamically define ways in which an application can be interacted. with. Instead of communicating with an application directly, the system passes the request over to a small utility (thereafter referred to as “binding”), which interacts with the required resource and returns the result of the performed operation back to the system. Every binding is a small software and can be written by any user, in a manner similar to application registration. The system provides a number of standard utilities to simplify the process of binding creation and interact with third-party web resources, their APIs or in the form of web scraping. In order to ensure security of the underlying architecture, bindings are executed in a secure environment, where access is restricted to the routines required for request handling. Additionally, extra adjustments need to be made to ensure bindings do not contain any malicious code, which may be used to intercept users personal information, authentication details or similar information. To provide a functionality to prevent such situations the system implements an additional feature for bindings to be reviewed by administrators. By default, every new or modified binding is put into a “non-approved” state. The system administrators may then review the binding code and either mark it as “safe” or “non-safe”. Thus, only the author of a non-approved binding can trigger its execution prior to review, which may be useful for development and testing purposes. However, some users may still wish to use bindings which have not been reviewed. The system, hence, allows to approve a binding for a user's personal use. However, bindings that have been marked as non-safe are never executed independently of personal approval, unless the author of the binding changes its code in which case the binding becomes a non-reviewed again. Finally, target applications may require additional input in order to process certain requests. 
Examples of such requests include authorization into an application, user identification (username or user unique ID may be required), uploading a new profile picture (new file required), bookmarking a web page using social bookmarking systems (an URL is required) and other similar tasks. Therefore, the system provides a “Data Interface” entity to maintain sets of required and optional parameters available to pass over to the target application. Bindings, however, are not required to specify an interface in order to operate. Instead, the functionality is provided as a support feature to help enhance interaction between the end user and the target application (data source). Thus, users will be notified if there is any missing parameters that are required by the binding. Therefore, binding developers may ensure that all the required data is provided by a user at the moment of execution and does not cause unpredictable behavior of the target application. The suggested approach provides very high level of flexibility as it allows to interact with any web resource in a unique way, specific to the resource. At the same time, it enhances the system functionality by allowing custom interfaces to be defined by binding developers. When combined with the identification subsystem, the overall architecture is capable of dynamically identifying user's data within the integrated applications. The overall system conceptual design is shown in Figure 4. 3.4 System components In general, due to the nature of the project, the system must be capable to operate at high loads and hence, allow for multiple bindings to be executed at the same time. Thus, the project implements a distributed structure of functional elements in order to achieve this goal. Initially, the elements described in 4.2 and 4.3 are contained within the main operating server (thereafter referred to as “Server”) and the actual binding execution is handled by a separate execution controller (thereafter referred to as “Controller”). Server is capable of communication with multiple Controllers at the same time in the common distributed objects architectural style. Every Controller, in turn contains one or many secure execution environments (thereafter referred to as “Runtime”), which processes binding code for every request. Each Runtime provides a safe environment where untrusted code, such as bindings can be executed. Requests to third-party applications are performed from within the environment. Thus, should a binding fail to complete a request in a reasonable time interval, it will not affect the behavior of the rest of the system. Apart from this, bindings may need to store some data within the system (for example secure tokens to identify itself for the third-party application). Such data may only be accessed by binding developers. At the same time users may want to save their input within the system, to avoid entering the required parameters on every data request. Such user data saved within the system must only be available at the time of execution. Since Controller is in Additionally, it is required to produce effective results prior to reaching a reasonable number of users. Thus, it is possible to redefine usage numbers based on the statistical data of freely available online surveys. Additionally, some applications provide information on their usage demographics. Such data was collected and aggregated in the beginning of the development phase in order to provide initial statistical counts for the search algorithm. 
At the same time, the statistical data is considered less important than data about actual users of the system. Similarly, data obtained from registered users is of higher value than data gathered from those who have used the system, either implicitly or directly, without registering for an account. The difference in value between these datasets is taken into account by assigning a weight to each component of the algorithm. The general form of the resulting algorithm is as follows.

**Total Score for an Application** $a$ is defined as:

$$TSA(a) = TRS(a) + ERS(a) + TRAA(a),$$

where:

- $TRS(a)$ - Total Relevance Score for $a$
- $ERS(a)$ - Environment Relevance Score for $a$
- $TRAA(a)$ - Total Recommendation based on Application-to-Application score for $a$

**Total Relevance Score** $TRS(a)$ is expanded as:

$$TRS(a) = AMS(a)\,\omega_{ams} + SMS(a)\,\omega_{sms},$$

where:

- $AMS(a)$ - Application Match Score for $a$. This element is calculated differently depending on the request. When requesting a data operation on a given service, $AMS$ is always zero; when the user searches for an application using plain-text search, it equals the number of matches within the application title and description.
- $SMS(a)$ - Service Match Score for $a$, defined as

$$SMS(a) = \sum_{s \in S(a)} \frac{1}{\mathrm{distance}(s) + 1},$$

where $S(a) = \mathrm{ParentsOnBranch}(\mathrm{required}) \cap \mathrm{ProvidedServices}(a)$ is the set of services provided by $a$ that lie on the branch of the required service, and $\mathrm{distance}(s)$ is the number of tree levels between $s$ and the required service.
- $\omega_{ams}$ and $\omega_{sms}$ are pre-defined weights for $AMS$ and $SMS$ respectively.

**Environment Relevance Score** $ERS(a)$ is expanded as:

$$ERS(a) = \sum_{e \in E(a)} EMS(e, Env(u)) \cdot \frac{EUC(a,e)}{ETUC(e)} \cdot \omega_{env},$$

where:

- $E(a)$ - set of environments in which $a$ is used
- $Env(u)$ - environment instance of the current user $u$
- $EMS(e, Env(u))$ - Environment Match Score for $e$ and $Env(u)$, i.e. the number of matching (identical) values between the two environment instances
- $EUC(a,e)$ - Environment-Application Usage Count for $a$ in $e$, i.e. the number of users in the environment using the application, according to the pre-defined statistical data
- $ETUC(e)$ - Environment Total Usage Count for $e$, i.e. the total number of users in the environment, according to the pre-defined statistical data
- $\omega_{env}$ - pre-defined weight for the Environment Relevance Score.

**Total Recommendation based on Application-to-Application score** $TRAA(a)$ is expanded as:

$$TRAA(a) = \frac{\sum_{b \in A(u)} RAA(a,b)}{\sum_{c \in AR(a)} RAA(a,c)} \cdot \omega_{raa},$$

where:

- $A(u)$ - set of applications the current user $u$ is known to use
- $AR(a)$ - set of applications used along with $a$
- $RAA(a,b)$ - Application-to-Application score for $a$ and $b$ (precomputed)
- $\omega_{raa}$ - pre-defined weight for $TRAA$.

Therefore, the full expanded form of the resulting algorithm can be written as:

$$TSA(a) = AMS(a)\,\omega_{ams} + SMS(a)\,\omega_{sms} + \sum_{e \in E(a)} EMS(e, Env(u)) \cdot \frac{EUC(a,e)}{ETUC(e)} \cdot \omega_{env} + \frac{\sum_{b \in A(u)} RAA(a,b)}{\sum_{c \in AR(a)} RAA(a,c)} \cdot \omega_{raa}.$$

The suggested algorithm is reasonably efficient: its computational complexity does not depend on the number of users active within the system, and it is capable of operating without an initial mass of active users.
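To illustrate how the expanded formula could be evaluated in practice, the following sketch computes $TSA(a)$ from component scores that are assumed to have been obtained elsewhere; the structure fields, weights and example numbers are invented for this illustration and do not come from the paper.

```c
#include <stdio.h>

/* Illustrative sketch of the scoring algorithm: the component scores are
 * assumed to be precomputed elsewhere; weights are example values only. */
typedef struct {
    double ams, sms;          /* application / service match scores      */
    double ems[4];            /* environment match vs. the current user  */
    double euc[4], etuc[4];   /* per-environment usage counts            */
    int    n_envs;
    double raa_used, raa_all; /* sums of app-to-app scores (used / all)  */
} AppScores;

static double total_score(const AppScores *a,
                          double w_ams, double w_sms,
                          double w_env, double w_raa) {
    double trs = a->ams * w_ams + a->sms * w_sms;
    double ers = 0.0;
    for (int i = 0; i < a->n_envs; i++)
        ers += a->ems[i] * (a->euc[i] / a->etuc[i]) * w_env;
    double traa = (a->raa_all > 0.0) ? (a->raa_used / a->raa_all) * w_raa : 0.0;
    return trs + ers + traa;
}

int main(void) {
    /* Hypothetical scores for one candidate application. */
    AppScores candidate = { .ams = 0.0, .sms = 1.5,
                            .ems = {2.0}, .euc = {120.0}, .etuc = {400.0},
                            .n_envs = 1, .raa_used = 3.0, .raa_all = 9.0 };
    printf("TSA = %.3f\n", total_score(&candidate, 1.0, 1.0, 1.0, 1.0));
    return 0;
}
```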
The algorithm also allows the importance of its components to be tuned, providing a way to balance the results towards denser datasets. For example, once the system reaches a reasonably high number of users, the statistical component may be withdrawn completely. Finally, the algorithm considers two dimensions of data by utilizing both user environment data and application usage data; multidimensional algorithms are generally more effective than unidimensional ones and provide more accurate results [16].

Apart from the statistical counts predefined within the system, automatic discovery of partial user data was implemented. On the first visit, the system attempts to detect the visitor's country based on their IP address. It also attempts to detect the user's language from the Accept-Language header sent by the client's browser. This header is part of the HTTP standard and normally contains a number of preferred languages in a standardized format, which usually correspond to the language of the browser software in use.

In the context of the algorithm, all applications represent user preferences, so both local and online applications may be treated equally, even though they differ by type. The choice of web browser and operating system may therefore reflect the personal preferences of a user. The system attempts to detect the operating system and browser automatically from the standard HTTP User-Agent header sent by the browser, which commonly includes information on the underlying operating system. Such applications are registered within the system as ordinary entities, but placed into a separate category and excluded from the set of recommendations produced by the algorithm. As a result, two environmental parameters (country and language) and two preferred applications (browser and operating system) are detected automatically to create the required minimal dataset. The algorithm is therefore able to provide initial results without any intervention from the end user.

### 4.2 Data Interface

The data access interface was implemented in the REST (Representational State Transfer) architectural style. In contrast to traditional integration technologies (such as WSDL and UDDI), REST relies on the standard HTTP protocol, does not require a separate resource discovery service, and is easier to implement, find and invoke [17]. Additionally, in the context of the current project, where a large number of independent queries must be allowed to execute at the same time, REST is a more suitable choice than traditional methods of Web integration [18]. Finally, in line with the project's emphasis on flexibility, REST is a scalable architecture which may easily be extended to a very large number of usage scenarios [19, 20].

In its general form the interface can be written as:

```plaintext
http://example.com/service/<service_path>/data[extension]
    [/id=user_id][/app=application_id][/bind=binding_id]
    [binding_input_arguments]
```

where all parameters in angle brackets (`< >`) are required, all parameters in square brackets (`[ ]`) are optional, and:

- `service_path` - hierarchical path to the required service, for example: social-networking/profile/name
- `extension` - extension of the required data format (optional), for example html, xml, text or jpg
- `user_id` - unique ID number of a project user (optional). This may be used to make cross-user calls for data.
If not specified the current user is used. If a foreign user profile is specified access must be granted by the foreign user to the current one. - `application_id` - unique ID number of an application (optional). Can be used to specify a different application to the one suggested by the identification algorithm. - `binding_id` - unique ID number of a binding (optional). Allows to specify a different binding to the default one. • binding_input_arguments – a set of key-value pairs to pass to the binding. Uses the standard form of key=value[&key=value][…] If the request does not use the GET method, the set of input arguments is derived from a composition of arguments provided in the URL and the body of the request, where preference is given to the latter. Each of the optional arguments may be specified independently of other arguments. The system will pass all the specified parameters to the target binding. The system response to a data request depends on the result of the operation and the input parameters, such as expected format. The example of an XML response to a request to /service/social-networking/profile/name/data.xml is shown in Figure 6. ```xml <data result="success" code="1"> <title>Data / Name / Profile / Social networking</title> <info> <application id="1" name="Facebook" uri="http://example.com/app/view/1"/> <binding id="8" name="FacebookName" uri="http://example.com/binding/view/8"/> <interface id="53" name="Social networking / Profile / Name: FacebookName" uri="http://example.com/service/social-networking/profile/name/interface/app=1/binding=8"/> <format type="data" name="XML" extension="xml" mimename="text/xml"/> <service id="5" name="Name" treepath='social-networking/profile/name/' uri="/service/social-networking/profile/name/"/> </info> <result>Alex Berezovsky</result> </data> ``` Figure 6: XML data response example 4.3 Performance Although the final system demonstrated good performance results during stress testing period, a large area for improvement was seen in the application of caching framework to the system. Caching was implemented for a large proportion of internal functionality of the system. Apart from standard per user page cache, the major improvements were concerned with the identification algorithm. Thus, the environment match counts (EMS) and application-to-application similarity scores (AMS) were cached, followed by caching of the major algorithm components. The system was adjusted to drop individual cache elements on user actions which may affect the results of the algorithm (for example, on change of user environment). This allowed to dramatically decrease the number of queries to the database required in order to process a request. Generally speaking, due to the granular approach to the caching subsystem, most requests required no queries to be made to the database. The implementation of the caching framework proved to significantly increase response rates of the system and overall stability. The final stress testing demonstrated a 30% increase in the amount of requests processed in a time interval and over 20% increase in stability for the price of only 9% increase in memory usage. During the final stages of implementation, when the project matured to a state of fully functional system, it was decided to undertake a more intensive testing by opening the project to the public. In early March 2010 the project was deployed in the form of a public preview, set up and released. 
The users of the site were asked to provide feedback and comments on their experience. This testing phase allowed to gather information about the suitability of the implemented system for wide public use. The feedback received was used to make further adjustments to the project code, which mainly involved usability and user interface improvements. Additionally, with the increase in number of users, the implemented algorithm demonstrated improved precision due to the growing data set. This demonstrated the efficiency of the suggested approach for the problem domain. Finally, the attention drawn to the project at the public preview stage demonstrated that the overall system presents a novel solution to web application integration and may have further implications on the common practice of web development. 5. CONCLUSION The current project is an effort to provide a flexible and scalable solution to the problem of large-scale application integration on the Web. It can be said that the resulting system meets the devised set of requirements for such solution and presents a novel approach to the problem. Contrast to the existing products, it allows for granular and flexible access to application data, independently of the internal implementation details. In the meantime, the suggested approach does not limit the scope of data resources available for integration and does not require any adjustments to be made within the applications. In fact, any data available to a user on the Web can be integrated and accessed using the system. Broadly speaking, the domain of applicability of the system is not limited to Web resources, but can be extended to any data entity accessible via the Internet. Finally, the identification algorithm of the project allows to dynamically compute the most appropriate applications for a user. As a side effect the results produced by the algorithm can be used in many other areas, and the algorithm itself may be applicable in a different field of similar problem domain. In conclusion, it can be said that the overall project presents a significant contribution to the research area of dynamic data integration and service orchestration on the Web. A further study of the area may reveal new intriguing challenges which may not yet be predicted. 6. DISCUSSION Whilst the current project provides a way to identify the most appropriate data sources for a given service, it relies on the database of applications manually entered into the system. A rather interesting research challenge would be to achieve automatic discovery of such applications on the Web. An easy way to achieve this would be the use of emerging Semantic Web technologies. However, at the moment of writing there is very little number of applications on the Web that supply semantic data related to the provided services. Although, there is no common approach to achieve application discovery at present, a deeper study of the problem may substantially benefit the current project and the overall research area in general. Apart from this, the capability to extract granular chunks of data from the Web provided by the system may be enhanced with the Semantic Web technologies. Thus, application of the technologies to the data extraction capabilities may allow for dynamic knowledge elicitation and construction of large-scale personal knowledge systems. 
The implications of such an approach are difficult to predict, but the resulting system may make a substantial contribution to the construction of personal agents, expert systems and other products. Finally, although the system demonstrated good results in testing, its behavior under high load in a real-world situation remains to be seen. This may raise new challenges and open a vast area for improvement. In the meantime, the interface suggested in the current study was devised during the research undertaken on the topic. The introduction of a standardized interface for addressing data entities on web resources may lead to a greater degree of distributed integration between systems and result in a truly dynamic and personal Web. 7. REFERENCES
{"Source-Url": "https://eprints.soton.ac.uk/271693/1/mashups10submission.pdf", "len_cl100k_base": 6954, "olmocr-version": "0.1.50", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 28298, "total-output-tokens": 8770, "length": "2e12", "weborganizer": {"__label__adult": 0.000324249267578125, "__label__art_design": 0.0007872581481933594, "__label__crime_law": 0.0003657341003417969, "__label__education_jobs": 0.0009212493896484376, "__label__entertainment": 9.703636169433594e-05, "__label__fashion_beauty": 0.0001537799835205078, "__label__finance_business": 0.00025844573974609375, "__label__food_dining": 0.0003094673156738281, "__label__games": 0.00046133995056152344, "__label__hardware": 0.0008654594421386719, "__label__health": 0.0005030632019042969, "__label__history": 0.000354766845703125, "__label__home_hobbies": 6.747245788574219e-05, "__label__industrial": 0.0002968311309814453, "__label__literature": 0.0003578662872314453, "__label__politics": 0.00023615360260009768, "__label__religion": 0.0004096031188964844, "__label__science_tech": 0.053802490234375, "__label__social_life": 0.00010138750076293944, "__label__software": 0.01763916015625, "__label__software_dev": 0.9208984375, "__label__sports_fitness": 0.00017249584197998047, "__label__transportation": 0.0004279613494873047, "__label__travel": 0.0001976490020751953}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 39579, 0.02846]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 39579, 0.23325]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 39579, 0.89417]], "google_gemma-3-12b-it_contains_pii": [[0, 4563, false], [4563, 10292, null], [10292, 15522, null], [15522, 20499, null], [20499, 22959, null], [22959, 28395, null], [28395, 34437, null], [34437, 39579, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4563, true], [4563, 10292, null], [10292, 15522, null], [15522, 20499, null], [20499, 22959, null], [22959, 28395, null], [28395, 34437, null], [34437, 39579, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 39579, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 39579, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 39579, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 39579, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 39579, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 39579, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 39579, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 39579, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 39579, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 39579, null]], "pdf_page_numbers": [[0, 4563, 1], [4563, 10292, 2], [10292, 15522, 3], [15522, 20499, 4], [20499, 22959, 5], [22959, 28395, 6], [28395, 34437, 7], [34437, 39579, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 39579, 0.0]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
106b500d8411a5683f7ffa4cafda7aa9d625cea2
The Fourier transform was originally developed by Joseph Fourier [3] for the study of heat transfer and vibrations. Fourier transforms are currently used in the study of differential equations, approximation theory, quantum mechanics, time-series analysis, implementation of high precision arithmetic, digital signal processing, GPS, sound and video compression, digital telephony and encryption. The fast Fourier transform is a divide and conquer algorithm developed by Cooley and Tukey [1] to efficiently compute a discrete Fourier transform on a digital computer. In 2000 Dongarra and Sullivan listed the fast Fourier transform among the top 10 algorithms of the 20th century [2]. The Discrete Fourier Transform The discrete Fourier transform is given by the matrix-vector multiplication $Ax$ where $A$ is an $N \times N$ matrix with general term given by $a_{kl} = e^{-i2\pi kl/N}$ with $k = 0, 1, \ldots, N - 1$ and $l = 0, 1, \ldots, N - 1$. While standard mathematical notation for matrices and vectors use index variables which range from 1 to $N$, we have shifted the indices by one so that the first column and first row of $A$ are given by $k = 0$ and $l = 0$. Shifting indices in this way is both the natural for the C programming language and the mathematics. This shifted notation for indices will be used throughout our computational study of linear algebra. Define $\overline{A}$ to be the matrix whose entries are exactly the complex conjugates of the entries of $A$. Our first result is **The Fourier Inversion Theorem.** Let $A$ be the $N \times N$ Fourier transform matrix defined above. Then $$A^{-1} = \frac{1}{N} \overline{A}.$$ To see why this formula is true we first prove **The Orthogonality Lemma.** Suppose $l, p \in \{0, 1, \ldots, N - 1\}$, then $$\sum_{q=0}^{N-1} e^{i2\pi(l-p)q/N} = \begin{cases} N & \text{for } l = p \\ 0 & \text{otherwise.} \end{cases}$$ Proof of The Orthogonality Lemma. Since \[ 0 \leq l \leq N - 1 \quad \text{and} \quad -(N - 1) \leq -p \leq 0, \] then \(-(N - 1) \leq l - p \leq N - 1\) and consequently \[ -2\pi \left(1 - \frac{1}{N}\right) \leq 2\pi (l - p)/N \leq 2\pi \left(1 - \frac{1}{N}\right). \] Define \(\omega = e^{i2\pi (l-p)/N}\). Since the only time \(e^{i\theta} = 1\) is when \(\theta\) is a multiple of \(2\pi\), we conclude that \[ \omega = 1 \quad \text{if and only if} \quad l = p. \] Clearly, if \(l = p\) then \[ \sum_{q=0}^{N-1} e^{i2\pi (l-p)q/N} = \sum_{q=0}^{N-1} w^q = \sum_{q=0}^{N-1} 1 = N. \] On the other hand, if \(l \neq p\) then \(\omega \neq 1\). In this case, \[ \omega^N = e^{i2\pi (l-p)} = 1 \] and the geometric sum formula yields that \[ \sum_{q=0}^{N-1} e^{i2\pi (l-p)q/N} = \sum_{q=0}^{N-1} \omega^q = \frac{1 - \omega^N}{1 - \omega} = \frac{1 - 1}{1 - \omega} = 0. \] This finishes the proof of the lemma. We are now ready to explain the Fourier inversion theorem. Proof of The Fourier Inversion Theorem. Let \(b = Ax\) and \(c = \frac{1}{N} \overline{A}b\). Claim that \(c = x\). By definition \[ b_k = \sum_{l=0}^{N-1} e^{-i2\pi kl/N} x_l \quad \text{and} \quad c_p = \frac{1}{N} \sum_{q=0}^{N-1} e^{i2\pi pq/N} b_q. \] Substituting yields \[ c_p = \sum_{q=0}^{N-1} e^{-i2\pi pq/N} \left( \frac{1}{N} \sum_{l=0}^{N-1} e^{i2\pi ql/N} x_l \right) = \frac{1}{N} \sum_{l=0}^{N-1} \left\{ \sum_{q=0}^{N-1} e^{i2\pi (l-p)q/N} \right\} x_l \] \[ = \frac{1}{N} \sum_{l=0}^{N-1} \left( \begin{array}{c} N \quad \text{for} \ l = p \\ 0 \quad \text{otherwise} \end{array} \right) x_l = \frac{N}{N} x_p = x_p. \] This finishes the proof of the theorem. 
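As a small added sanity check (not part of the original notes), the theorem can be verified by hand for the smallest non-trivial case $N = 2$:

$$A = \begin{pmatrix} 1 & 1 \\ 1 & e^{-i\pi} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad \overline{A} = A,$$

so the theorem asserts $A^{-1} = \tfrac{1}{2}\overline{A} = \tfrac{1}{2}A$, and indeed

$$\frac{1}{2}\,\overline{A}A = \frac{1}{2}\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} = I.$$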
Let's pause for a moment to implement a computer program that computes the Fourier transform and the inverse Fourier transform directly from the definitions using matrix-vector multiplication. The resulting C code looks like

```c
#include <stdio.h>
#include <stdlib.h>
#include <strings.h>
#include <math.h>
#include <complex.h>
#include "matrixlib.h"

void dft(int N, complex x[N], complex b[N]) {
    bzero(b, sizeof(complex)*N);
    for (int k=0; k<N; k++) {
        for (int l=0; l<N; l++) {
            b[k] += cexp(-I*2*M_PI*k*l/N)*x[l];
        }
    }
}

void invdft(int N, complex x[N], complex b[N]) {
    bzero(b, sizeof(complex)*N);
    for (int k=0; k<N; k++) {
        for (int l=0; l<N; l++) {
            b[k] += cexp(I*2*M_PI*k*l/N)*x[l]/N;
        }
    }
}

#define FTSIZE 8
complex X[FTSIZE], B[FTSIZE], C[FTSIZE];

int main() {
    for (int i=0; i<FTSIZE; i++) {
        X[i] = 1.0/(i+1) + I*1.0/(FTSIZE-i);
    }
    printf("N=%d\n", FTSIZE);
    printf("X=\n");
    cvecprint(FTSIZE, X);
    dft(FTSIZE, X, B);
    printf("B=\n");
    cvecprint(FTSIZE, B);
    invdft(FTSIZE, B, C);
    printf("C=\n");
    cvecprint(FTSIZE, C);
    return 0;
}
```

and produces the output

```
N=8
X=
(1 0.125)
(0.5 0.142857)
(0.333333 0.166667)
(0.25 0.2)
(0.2 0.25)
(0.166667 0.333333)
(0.142857 0.5)
(0.125 1)
B=
(2.71786 2.71786)
(-0.0863919 -0.208568)
(2.22045e-16 -0.583333)
(0.285647 -0.689613)
(0.634524 -0.634524)
(1.01973 -0.422384)
(1.44762 -1.27676e-15)
(1.98102 0.820565)
C=
(1 0.125)
(0.5 0.142857)
(0.333333 0.166667)
(0.25 0.2)
(0.2 0.25)
(0.166667 0.333333)
(0.142857 0.5)
(0.125 1)
```

Note that the value for $c$ is the same as $x$. This is consistent with the Fourier Inversion Theorem and leads us to believe that the above code is producing correct results. Making sure the code is producing the correct answer is an important first step before any sort of optimization is attempted.

We now analyze the performance of the above simple Fourier transform code. Observe that the `dft` and `invdft` routines each consist of two loops of length $N$. As the loops are nested, the resulting number of operations is $N^2$. We will obtain a significant performance increase by changing the code to use the fast Fourier transform algorithm which only takes about $N \log N$ number of operations. Before doing this, we instrument the above slow algorithm with timing routines and also create a parallel version for an example of parallel programming. The modified code looks like

```c
#include <stdio.h>
#include <stdlib.h>
#include <strings.h>
#include <math.h>
#include <complex.h>
#include <cilk/cilk.h>
#include "matrixlib.h"
#include "tictoc.h"

void dft(int N, complex x[N], complex b[N]){
    bzero(b, sizeof(complex)*N);
    for(int k=0; k<N; k++) {
        for(int l=0; l<N; l++) {
            b[k] += cexp(-I*2*M_PI*k*l/N) * x[l];
        }
    }
}

void pdft(int N, complex x[N], complex b[N]){
    bzero(b, sizeof(complex)*N);
    cilk_for(int k=0; k<N; k++) {
        for(int l=0; l<N; l++) {
            b[k] += cexp(-I*2*M_PI*k*l/N) * x[l];
        }
    }
}

#define FTSIZE 4096
complex X[FTSIZE], B[FTSIZE], C[FTSIZE];

int main(){
    for(int i=0; i<FTSIZE; i++) {
        X[i] = 1.0/(i+1)+I*1.0/(FTSIZE-i);
    }
    printf("N=%d\n", FTSIZE);
    tic();
    dft(FTSIZE, X, B);
    double t = toc();
    printf("dft took %g seconds.\n", t);
    tic();
    pdft(FTSIZE, X, B);
    t = toc();
    printf("parallel dft took %g seconds.\n", t);
    return 0;
}
```

To use multiple processor cores the only change needed is to add a parallel loop for the matrix multiplication denoted by cilk_for on line 20.
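The listings above include a small `tictoc.h` timing helper that is not reproduced in these notes; a minimal sketch of what such a helper might look like (an assumption for illustration, not the actual header used here) is:

```c
/* tictoc.h -- minimal wall-clock timing helper (illustrative sketch) */
#ifndef TICTOC_H
#define TICTOC_H

#include <time.h>

static struct timespec tictoc_start;

/* Record the current time. */
static void tic(void) {
    clock_gettime(CLOCK_MONOTONIC, &tictoc_start);
}

/* Return the number of seconds elapsed since the last call to tic(). */
static double toc(void) {
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - tictoc_start.tv_sec)
         + (now.tv_nsec - tictoc_start.tv_nsec) * 1e-9;
}

#endif
```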
On a 2.8Ghz dual-core AMD Athlon64 X2 5400+ based system the above code produces the output ``` N=4096 dft took 3.38255 seconds. parallel dft took 1.69255 seconds. For this system, a factor-of-two performance increase is obtained when switching from one to two CPUs. The same program when run on a 2.4Ghz twelve-core Intel Xeon E5-2620 based system produces the output N=4096 dft took 1.97895 seconds. parallel dft took 0.191574 seconds. In this case the performance increase was about 10.3 times faster, which is 86 percent of the optimal 12 times speedup. One reason a program may not scale linearly is because speed of access to main memory does not increase with the number of cores. Even though there are more cores available to perform the calculation, efficiency can eventually be limited by the read-write speed of main memory. Runtime overhead related to scheduling the parallel cilk_for loop on the multiple cores can also affect parallel efficiency. If you are following these lectures hands-on, please check how well the parallel code scales to multiple cores on your own computer. The Fast Fourier Transform While a factor ten speedup was easy to obtain by parallelizing the slow algorithm, in the case of the Fourier transform much more significant gains can be achieved by using a conquer and divide approach. This is possible because the matrix $A$ corresponding to the Fourier transform has a significant number of symmetries in it based on the factors of the length $N$ of the transform. For simplicity we will assume that $N = 2^n$ for some positive integer $n$. Thus, $N$ is divisible by 2 and we can write $2K = N$. It follows that $$ \sum_{l=0}^{N-1} e^{-i2\pi kl/N} x_l = \sum_{l \text{ even}} e^{-i2\pi kl/N} x_l + \sum_{l \text{ odd}} e^{-i2\pi kl/N} x_l $$ $$ = \sum_{p=0}^{K-1} e^{-i2\pi kp/K} x_{2p} + e^{-i2\pi k/N} \sum_{p=0}^{K-1} e^{-i2\pi kp/K} x_{2p+1} $$ Note that the original Fourier transform of size $N$ has been rewritten as two smaller Fourier transforms of size $K$ which then need to be combined. The combining is done by multiplying the second transform by the factor $e^{-i2\pi k/N}$ for $k = 0, 1, \ldots, N-1$ which results in $N$ additional multiplications. Therefore, the total number of operations has been reduced to $$ K^2 + N + K^2 = 2\left(\frac{N}{2}\right)^2 + N = \frac{1}{2}N^2 + N $$ which is a reduction of almost half the original $N^2$. We are now ready to prove **The Fast Fourier Transform Theorem.** Suppose $N = 2^n$, then the Fourier transform can be computed in $N(1 + \log_2 N)$ number of operations. Proof of The Fast Fourier Transform Transform Theorem. Since \( K = 2^{n-1} \) then \( K \) is either equal to 1 or again divisible by 2. In the case \( K \) is divisible by 2 we compute the resulting Fourier transforms of size \( K \) by further dividing them into Fourier transforms of size \( K/2 \). Continue dividing the resulting Fourier transforms into smaller ones until a Fourier transform of size 1 is reached which is then trivial. For example, after the second division the number of operations is given by \[ \left\{ \left( \frac{K}{2} \right)^2 + K + \left( \frac{K}{2} \right)^2 \right\} + N + \left\{ \left( \frac{K}{2} \right)^2 + K + \left( \frac{K}{2} \right)^2 \right\} = 4\left( \frac{K}{2} \right)^2 + 2K + N = 4\left( \frac{N}{4} \right)^2 + 2N = 2^2(2^{n-2})^2 + 2 \cdot 2^n. \] After, the third division each transform of size \( K/2 \) would then computed using transforms of size \( K/4 \). 
This results in

\[ 8\left( \frac{K}{4} \right)^2 + 3N = 2^3(2^{n-3})^2 + 3 \cdot 2^n \]

number of operations. This divide and conquer process may be continued \( n \) times and then yields an algorithm that takes

\[ 2^n(2^{n-n})^2 + n \cdot 2^n = 2^n(1 + n) = N(1 + \log_2 N) \]

number of operations. We remark that \( N(1 + \log_2 N) \) number of operations can be much smaller than \( N^2 \) when \( N \) is large. When \( N = 4096 \), as used for our previous numerical test, it follows that

\[ N(1 + \log_2 N) = 53248 \quad \text{and} \quad N^2 = 16777216. \]

Since \( 16777216/53248 \approx 315 \), using the fast Fourier transform is roughly equivalent to having about 315 additional processor cores when \( N = 4096 \). For larger values of \( N \) the advantages are even more pronounced. When \( N = 65536 \) the slow algorithm takes an impractically long time; for values of \( N \) corresponding to vectors that are sized to the limits of available memory, the fast algorithm is the only way to complete the computation. We finish by presenting a recursive routine to compute the fast Fourier transform. The code

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <complex.h>
#include "matrixlib.h"

void fft(int N, int s, complex x[], complex b[]){
    if(N==1) {
        b[0]=x[0];
        return;
    }
    if(N%2){
        printf("N not divisible by 2!\n");
        exit(1);
    }
    int K=N/2;
    fft(K,2*s,&x[0],&b[0]);
    fft(K,2*s,&x[s],&b[K]);
    for(int k=0;k<K;k++){
        complex even=b[k],odd=b[k+K];
        complex w=cexp(-I*2*M_PI*k/N);
        b[k]=even+w*odd;
        b[k+K]=even-w*odd;
    }
}

#define FTSIZE 8
complex X[FTSIZE],B[FTSIZE];

int main(){
    for(int i=0;i<FTSIZE;i++) {
        X[i]=1.0/(i+1)+I*1.0/(FTSIZE-i);
    }
    printf("N=%d\n",FTSIZE);
    fft(FTSIZE,1,X,B);
    printf("fft_B=\n");
    cvecprint(FTSIZE,B);
    return 0;
}
```

produces the output

```
N=8
fft_B=
(2.71786 2.71786)
(-0.0863919 -0.208568)
(0 -0.583333)
(0.285647 -0.689613)
(0.634524 -0.634524)
(1.01973 -0.422384)
(1.44762 5.55112e-17)
(1.98102 0.820565)
```

Compare the output for the slow routine to the fast routine. When optimizing a computer program it is important to compare the results produced by the optimized code to known correct results. The fact that the output is the same in this case suggests that the optimized code performs the same calculation as the original program. Although one test case, or even a hundred, would not be a sufficient guarantee that two different algorithms always produce the same results, such testing is useful and can catch many errors. For now, we assume the code is correct and proceed to check performance by instrumenting the code with timing routines and also creating a parallel version, as was done for the slow Fourier transform.
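Before turning to the parallel version, it is worth spelling out (an added remark, not in the original text) why the combination loop in `fft()` can fill both halves of the output at once. Writing $E_k$ and $O_k$ for the size-$K$ transforms of the even and odd samples, the splitting derived earlier gives $b_k = E_k + e^{-i2\pi k/N} O_k$ for $k = 0, \ldots, N-1$. Both $E$ and $O$ are periodic with period $K$, and $e^{-i2\pi (k+K)/N} = e^{-i\pi} e^{-i2\pi k/N} = -e^{-i2\pi k/N}$, so for $k = 0, \ldots, K-1$

$$b_k = E_k + w_k O_k, \qquad b_{k+K} = E_k - w_k O_k, \qquad w_k = e^{-i2\pi k/N},$$

which is exactly the even/odd butterfly computed in the loop.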
The modified code looks like

```c
#include <stdio.h>
#include <stdlib.h>
#include <strings.h>
#include <math.h>
#include <complex.h>
#include <cilk/cilk.h>
#include "matrixlib.h"
#include "tictoc.h"

void fft(int N, int s, complex x[], complex b[]) {
    if (N==1) {
        b[0] = x[0];
        return;
    }
    if (N%2) {
        printf("N not divisible by 2!\n");
        exit(1);
    }
    int K=N/2;
    fft(K, 2*s, &x[0], &b[0]);
    fft(K, 2*s, &x[s], &b[K]);
    for (int k=0; k<K; k++) {
        complex even = b[k], odd = b[k+K];
        complex w = cexp(-I*2*M_PI*k/N);
        b[k] = even+w*odd;
        b[k+K] = even-w*odd;
    }
}

void pfft(int N, int s, complex x[], complex b[]) {
    if (s>16) {
        fft(N, s, x, b);
        return;
    }
    if (N==1) {
        b[0] = x[0];
        return;
    }
    if (N%2) {
        printf("N not divisible by 2!\n");
        exit(1);
    }
    int K=N/2;
    cilk_spawn pfft(K,2*s,&x[0],&b[0]);
    pfft(K,2*s,&x[s],&b[K]);
    cilk_sync;
    cilk_for(int k=0;k<K;k++){
        complex even=b[k],odd=b[k+K];
        complex w=cexp(-I*2*M_PI*k/N);
        b[k]=even+w*odd;
        b[k+K]=even-w*odd;
    }
}

#define FTSIZE 524288
complex X[FTSIZE],B[FTSIZE],C[FTSIZE];

int main(){
    for(int i=0;i<FTSIZE;i++) {
        X[i]=1.0/(i+1)+I*1.0/(FTSIZE-i);
    }
    printf("N=%d\n",FTSIZE);
    tic();
    fft(FTSIZE,1,X,B);
    double t=toc();
    printf("fft took %g seconds.\n",t);
    tic();
    pfft(FTSIZE,1,X,B);
    t=toc();
    printf("parallel fft took %g seconds.\n",t);
    return 0;
}
```

For the parallel code cilk_spawn in line 43 schedules one of the recursive calls to compute a smaller Fourier transform in a separate worker thread while the current thread recursively computes the other Fourier transform. The cilk_sync on line 45 makes sure both of the recursive calls have completed before the results are combined with a parallel loop in line 46. After pfft recurses 5 times, the stride given by s is equal to 32 and 32 parallel tasks have been created to perform the computation. At this point a single call is made to the serial fast Fourier transform, because none of the available computers have more than 32 cores. While switching to the serial algorithm is not strictly necessary, doing so helps reduce scheduling overhead.

On a 2.8Ghz dual-core AMD Athlon64 X2 5400+ based system the above code produces the output... We note that the fast Fourier transform performed a transform of size $N = 524288$ faster than the slow algorithm could handle a transform of size $N = 4096$. Again the factor-of-two parallel speedup occurs when computing with two cores. The same program when run on a 2.4Ghz twelve-core Intel Xeon E5-2620 based system produces the output

```
N=524288
fft took 0.548995 seconds.
parallel fft took 0.087564 seconds.
```

In this case the performance increase was about 6.2 times faster, which is only 52 percent of the optimal 12 times speedup. We observe that the memory access patterns of the fast Fourier transform involve recursively skipping by odd and even indexes. In particular the fast Fourier transform does not access memory sequentially. It is likely that this creates additional pressure on the memory subsystem which limits parallel performance. Again, if you are following these lectures hands-on, please check how well the parallel code scales to multiple cores on your own computer. References
{"Source-Url": "http://fractal.math.unr.edu/~ejolson/466/fourier.pdf", "len_cl100k_base": 5471, "olmocr-version": "0.1.53", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 30652, "total-output-tokens": 6650, "length": "2e12", "weborganizer": {"__label__adult": 0.0004506111145019531, "__label__art_design": 0.0005540847778320312, "__label__crime_law": 0.0005092620849609375, "__label__education_jobs": 0.00095367431640625, "__label__entertainment": 0.0001552104949951172, "__label__fashion_beauty": 0.0002142190933227539, "__label__finance_business": 0.0002741813659667969, "__label__food_dining": 0.0007138252258300781, "__label__games": 0.00087738037109375, "__label__hardware": 0.00554656982421875, "__label__health": 0.0009250640869140624, "__label__history": 0.0004320144653320313, "__label__home_hobbies": 0.00021195411682128904, "__label__industrial": 0.0010137557983398438, "__label__literature": 0.0003769397735595703, "__label__politics": 0.0003969669342041016, "__label__religion": 0.0008893013000488281, "__label__science_tech": 0.28955078125, "__label__social_life": 0.00011467933654785156, "__label__software": 0.007717132568359375, "__label__software_dev": 0.6865234375, "__label__sports_fitness": 0.00045990943908691406, "__label__transportation": 0.0008959770202636719, "__label__travel": 0.0002586841583251953}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 17344, 0.08299]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 17344, 0.81385]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 17344, 0.73779]], "google_gemma-3-12b-it_contains_pii": [[0, 1901, false], [1901, 3532, null], [3532, 4734, null], [4734, 6269, null], [6269, 7445, null], [7445, 9931, null], [9931, 12152, null], [12152, 13091, null], [13091, 14457, null], [14457, 15971, null], [15971, 17344, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1901, true], [1901, 3532, null], [3532, 4734, null], [4734, 6269, null], [6269, 7445, null], [7445, 9931, null], [9931, 12152, null], [12152, 13091, null], [13091, 14457, null], [14457, 15971, null], [15971, 17344, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 17344, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 17344, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 17344, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 17344, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 17344, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 17344, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 17344, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 17344, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 17344, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 17344, null]], "pdf_page_numbers": [[0, 1901, 1], [1901, 3532, 2], [3532, 4734, 3], [4734, 6269, 4], [6269, 7445, 5], [7445, 9931, 6], [9931, 12152, 7], [12152, 13091, 8], [13091, 14457, 9], [14457, 15971, 10], [15971, 17344, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 17344, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
0e18023613dd58373bf6612398a09aacdea37875
SAD technical report van Halderen, A.W.; de Ronde, J.F.; Beemster, M.; Sloot, P.M.A. Citation for published version (APA): General rights It is not permitted to download or to forward/distribute the text or part of it without the consent of the author(s) and/or copyright holder(s), other than for strictly personal, individual use, unless the work is under an open content license (like Creative Commons). Disclaimer/Complaints regulations If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please let the Library know, stating your reasons. In case of a legitimate complaint, the Library will make the material inaccessible and/or remove it from the website. Please Ask the Library: https://uba.uva.nl/en/contact, or a letter to: Library of the University of Amsterdam, Secretariat, Singel 425, 1012 WP Amsterdam, The Netherlands. You will be contacted as soon as possible. Commission of the European Communities ************************ ESPRIT III PROJECT NB 6756 ************************ CAMAS COMPUTER AIDED MIGRATION OF APPLICATIONS SYSTEM ************************ CAMAS-TR-2.2.4.3 SAD technical report ************************ Date: March 1994 — Review 3.0 ACE - Univ. of Amsterdam - ESI SA - ESI GmbH - FEGS - PARSYTEC - Univ. of Southampton 1 Introduction The performance modelling of complex programs is a tiresome and error-prone process when done for a large program. Counting the number of instructions in the program, analyzing the program structure and combining them in a detailed time complexity function takes up a lot of human time and because it is a precise but often boring task is likely to be subject to errors. More important, numerical programs can be so large and complex that the one simply loses the overview over the program. Tens of thousands lines of code are not uncommon for numerical programs; tracking the variable usage and subroutine nestings has then become a difficult task and if we want to model the performance characteristics of a program we need to analyze the program structure. With such a large structure, which can be very unclear in dusty deck programs, the modelling task has become problematic. If this work is done only once, it is acceptable to put some effort into it to do it by hand, but in a research or development environment we want to experiment with the different factors that influence the performance. Changing these factors change the time complexity formula, and requires to redo some of the work. When this is the case, it is necessary to create an environment in which most of the mind-boring tasks are automated and which assist the developer in the performance evaluation. Such a tool is described in this report. We describe the performance of a program using a SAD formula, which expresses the time complexity. The SAD formula (as described in [1]) consists of three layers. The first level of the SAD formula describes the execution cost of a basic block, which is simply a summation of the execution cost of the various instructions. The second level adds the control flow to the SAD formula. The execution time of a program is the execution time of each basic block times the number of times each block is executed. The second level SAD formula is therefore a summation of the product of the execution cost of a basic block and a certain factor. The factors are determined by the expressions in if and loop program constructs. 
The factors in this formula are however not independent, since in the real program the if constructs and loops are also nested and often depend on each other. Therefore we can split the factors into a product of a number of subfactors, where each factor may occur as a multiplication factor of more than one particular basic block. This results in the following generic SAD description:

$$SAD_2 = \sum_{i=1}^{N} \left( \prod_{k_i} P_{k_i} \right) \left( \prod_{m_i} X_{m_i} \right) Block_i$$

The sum of the execution costs is denoted by $Block_i$. For instance, for the statements

$$A = 1 + B * 5$$
$$B = A ** 9$$

the Block is equal to:

$$M(\text{addition}) + M(\text{multiplication}) + M(\text{store}) + M(\text{power}) + M(\text{store})$$

The $P_{k_i}$ and $X_{m_i}$ are the multiplication factors derived from if and loop constructs respectively. In the actual execution their values are fixed for a certain input, but for simulation purposes these can be mapped to probabilistic functions. The third level of the SAD formula expresses the data locality. In this way we will be able to describe the performance of SPMD parallel programs. This level is part of further development.

The main task of the tool which we are developing is to generate a SAD formula suitable for execution time estimation purposes. A SAD formula generated directly from the entire source code may, however, prove too large to handle. Also, the user of these simulation tools would often like to get some more information about the program. This could vary from describing which part of the program results in a certain multiplication factor, to being able to isolate code fragments for smaller simulations or profiling. This makes the tool a more interactive program which forms a base for making a SAD formula which can then be used for simulation. An interactive tool also allows you to annotate the source program with information you can determine, but which cannot be determined in an automatic manner. This may result in a better, smaller SAD formula. In the next two sections the generation of a SAD formula is described, followed by a section about increasing the quality of the SAD formula.

- arithmetic and logical: a basic computational function, like an addition, multiplication or exponential.
- procedure call: the actions needed to call a function or procedure; this consists of a subroutine call and the management of the arguments.
- array references: the action for referring to a 1, 2 or $n$ dimensional array.
- jumping and iteration: the actions needed for a DO loop (in other languages called a FOR loop), like initialization and re-iterating, or the action to perform a GO TO.
- builtin functions: intrinsic functions in Fortran or, more generally, library calls like log, exp, etc.

Table 1: A classification of the abstract machine instructions

2 The execution cost of a basic block

In the introduction we stated that we wanted to derive a SAD formula, describing the time complexity, from the source code. We must now first ask ourselves what kind of source code. In the field of numeric computing, many applications are still written in Fortran, so Fortran seems to be a logical choice for this project. We could also choose to take the machine instructions generated by a Fortran compiler. This has the potential to be more precise because we have more detailed knowledge of the underlying machine.
This however, severely restricts us in experimenting with different machines, because we need to rewrite the entire tool for this. If we base ourself on a high level language, like Fortran, we do not have this limitation. In the same manner it is not sensible to restrict oneself to a particular high level language. Not only because we can then only process this particular language, but also because we run the risk of being so language dependent that we lose sight of the underlying programming constructs. Aiming at the basic programming language constructs helps us writing simpler algorithms for rewriting the original source code into the resulting SAD formula and obtaining further information about the program. It is common practice to define an intermediate representation, which in our case is a very high level description. This representation is close enough to the original input source code to be a straightforward translation—and thus a close match in the performance estimation—and yet general enough to allow easy manipulation. This representation will be introduced gradually and is summarized in appendix A. The first level of the SAD formula expresses the execution cost of basic instructions. The instructions are based upon an abstract machine which closely resembles the actions defined in the high level language, but are detailed enough to express major performance characteristics of common computers. The table 2 gives an indication of the kind of instructions modelled. The choice of the instructions are described in more detail in [5]. It is inevitable that some of these instructions are language dependent, but including some additional instructions for a different programming language does not undermine the notion of being largely language independent. New instructions which are specific to other languages can be added easily, but the main structure of the instruction set remains the same and imperative languages share the same kind of operations. A nice characteristic of this instruction set is that it is machine independent; it is not even restricted to stack-based machines. The execution cost of an expression can easily be determined by counting the oc- currence of the operations in the expression. Since the Parasol machine database has been constructed in such a way to cover at least the main Fortran performance factors, all operations in Fortran find their counterpart in the database. Also, the main intrinsic functions available in Fortran are available in the database, and can thus be treated as the other operations. The fact that the items in the machine database for operations in expressions have been designed for Fortran, is not a limitation of the machine database, since most imperative programming languages provide the same kind of basic operations on ex- pressions (like addition, multiplication, power of, etc). Therefore the arithmetic and logical operations are not restricted to a certain language. 3 Adding control flow A structured program is a program in which every part of the program has only one entry and one exit point. In other words, the execution of every part of the program always begins at a certain statement and it finishes after executing a specific last statement of that part. This implies that, in a structured program, we can only use loops and if-then-else statements. It is not possible to continue execution from the body of the then part of an if statement to the middle of a loop statement. 
This violates the fact than a loop statement can only be entered through one single point. Generating a time complexity function of a structured program is much simpler than generating it from a program containing complicated go-to structures. Also other analyzing techniques are greatly simplified or depend on the program being highly structured. One of these techniques, symbolic evaluation, we will be using in a later section of this report. Structured programs are simpler for the process later on, but this does imply that we must transform a possible less structured program into a highly structured program. Less structured programs are in fact common usage, even in clear and well written code. Look at the following example, where an element is searched within a certain array: ```fortran DO 10, I=1, 100 IF (SEARCHVALUE.EQ.ARRAY(I)) GOTO 20 10 CONTINUE C element not found, do something I = -1 20 WRITE(*,*) 'index of value in array: ', I ``` Line 20 can be reached when the loop ends (after I becomes 100) or when the go-to is taken (the element has found), the program is thus not highly structured although this is very common programming style because this decreases the average time complexity by a factor two. The above example can be rewritten into a highly structured program. A generic algorithm is described in the next subsection and two special cases are described in the next section. Before we describe the restructuring algorithm, we define the input of the algorithm. The input is not directly Fortran source code, but an intermediate format. This format is simple and portable to use between different imperative languages. The translation from Fortran to this intermediate language is relatively easy and not described in this report. First we define the control flow statements of this intermediate format. There are two kind of control statements for structured programs, the choice (better known as the if statement) and three loop iteration statements. The if statement has three parameters: The first argument is an expression, which outcome must be boolean and 3 ADDING CONTROL FLOW on which outcome is determined which of the other two arguments must be evaluated. The second and third arguments contain the corresponding statements which must be evaluated for a true value (then-branch) or false value (else-branch). We define three kinds of loop iteration statements: 1. A loop statement at which the number of iterations of the body is known just before the loop is executed. In Fortran, this is the \texttt{DO}-statement; other languages sometimes call it a \texttt{FOR} statement (N.B. the \texttt{C for} statement is \textit{not} such a statement). We call this kind of loop statement a \texttt{LOOP}. A \texttt{LOOP} has two parameters: An expression, which is evaluated just before the loop is entered, and which outcome determines how many times the statements in the loop are executed. The second parameter contains the statements over which the iteration process must occur. Note the fact that we don’t include a loop counter in this statement. The initialisation of this counter, and it’s evaluation during the loop must be defined separately. 2. A loop statement where just before a next (or first) iteration is determined whether or not to continue the iteration process. This is known as an while-do loop. The \texttt{WHILEDO} loop statement also has two parameters, an expression and a body of statements. Now however, the expression is evaluated every time the loop iterates. 3. 
3.1 Restructuring the source code

The restructuring algorithm we use is described in [2]. It is based on two transformations ($T_1$ and $T_2$) and a node splitting routine which operate on a control flow graph. The nodes in the graph are basic blocks and the edges are expressions, both of which are already present in that form in the intermediate representation. The $T_1$-$T_2$ algorithm is not very complicated and yet efficient. The main drawback is the fact that large boolean expressions are generated, which can however be reduced. Using a simple boolean reduction algorithm, which only handles the cases generated by the $T_1$-$T_2$ algorithm, this drawback is overcome.

3.2 Generating the SAD formula

Structured programs have very simple rules for generating a time complexity function. When a loop statement iterates $n$ times, the time complexity function of the entire loop statement is the time complexity function of the body of the loop statement multiplied by $n$. Since our restructuring process led to a structured program, we can use these simple rules, formalized in Table 2 by the translation function $\text{tc}[[\,]]$, to handle the generation of the complexity function SAD.

There is, however, a problem with the procedure we used. First we restructured the program, and then we generated a time complexity formula. Unfortunately, the restructuring process is a transformation of the original program into a new, but semantically equivalent, program. Semantic equivalence does not mean that the time complexity formulas of the original and the transformed program are equal. There are two main reasons for the problems that arise:

1. The transformation of an irreducible control flow graph leads to the duplication of program code. Although this has no direct influence on the time complexity function as defined in Table 2, there is a risk of misinterpreting the formula when we want to extend it to model cache influences.
In the time complexity formula we generate using the rules so far, we have no indication that two blocks of statements originate from the same original source. This could lead to an overestimate of the execution cost: we may, for instance, charge for a full instruction fetch even when the statements are already cached. Currently we have no real support for multi-level memories, so caches are not modelled, but we do want to keep this option open.

2. Not only statements are duplicated; expressions and new conditional statements may also be introduced. Let us look at an earlier example program:

```fortran
      DO 10, I=1, 100
        IF (SEARCHVALUE.EQ.ARRAY(I)) GOTO 20
10    CONTINUE
      ...
```

\[
\begin{align*}
\text{tc}[[\text{WHILEDO expression statements}]] \rightarrow{}& \text{tc}[[\text{expression}]] + M(\text{jump}) \\
&+ P(\text{expression}=\text{true}) \times \bigl(\text{tc}[[\text{statements}]] + \text{tc}[[\text{expression}]] + M(\text{jump})\bigr) \\
\text{tc}[[\text{DOWHILE expression statements}]] \rightarrow{}& \text{tc}[[\text{statements}]] + \text{tc}[[\text{expression}]] \\
&+ P(\text{expression}=\text{true}) \times \bigl(\text{tc}[[\text{statements}]] + \text{tc}[[\text{expression}]] + M(\text{jump})\bigr) \\
\text{tc}[[\text{LOOP expression statements}]] \rightarrow{}& \text{tc}[[\text{expression}]] + M(\text{overhead}) + \text{expression} \times \bigl(\text{tc}[[\text{statements}]] + M(\text{iteration})\bigr) \\
\text{tc}[[\text{IF expression stats}_{\text{then}}\ \text{stats}_{\text{else}}]] \rightarrow{}& P(\text{expression}=\text{true}) \times \text{tc}[[\text{stats}_{\text{then}}]] + P(\text{expression}=\text{false}) \times \text{tc}[[\text{stats}_{\text{else}}]] + M(\text{jump}) \\
\text{tc}[[\text{EVAL expression}]] \rightarrow{}& \text{tc}[[\text{expression}]]
\end{align*}
\]

$M(\text{jump})$, $M(\text{overhead})$ and $M(\text{iteration})$ are machine parameters from the machine database.

Table 2: Rules for translating into a time complexity formula. $\text{tc}[[\text{expression}]]$ is defined as the summation of the execution cost of all its operations (as mentioned in the previous section).

After the transformation into a structured program, this program looks roughly like this:

```plaintext
i = 1
DO
  b = (searchvalue = array[i])
  IF not b THEN
    i = i + 1
  ENDIF
WHILE (i <= 100) and not b
...
```

Although the semantics are the same, the time complexity formula will not be what we expected, since the DO loop has changed into a DOWHILE and the two (sub)expressions `not b` were not present in the original code.

The solution is simple: separate the generation of the SAD time complexity formula into two processes. One process is done before the transformation of the original source code into structured code, the other is done after the transformation. The determination of the execution cost parameters is then not done on the structured code but on the original program. We annotate the program with the execution cost parameters from the machine database using a special `COST` statement. The cost statement has one argument: an expression containing a summation of the cost parameters of the preceding statement. If the program is annotated with these special statements, the annotations follow the same structure as the original program.
The rules for the insertion of these annotation statements are described in Table 3. The transformation to structured code can now be performed without the problems described earlier, since the execution cost has already been extracted and since the `COST` statements act just like `EVAL` statements, so that the transformation results in a semantically equivalent program. Now we use rules similar to the old ones to extract the SAD formula, but we no longer evaluate the $\text{tc}[[\,]]$ translation of the expressions and statements within loops and if statements; instead we collect the `COST` statements from the program (see Table 4).

\[
\begin{align*}
\text{cst}[[\text{GOTO label}]] \rightarrow{}& \text{COST } M(\text{jump});\ \text{GOTO label} \\
\text{cst}[[\text{LABEL label}]] \rightarrow{}& \text{LABEL label} \\
\text{cst}[[\text{EVAL expression}]] \rightarrow{}& \text{COST } \text{tc}[[\text{expression}]];\ \text{EVAL expression} \\
\text{cst}[[\text{IF expression statements}_1\ \text{statements}_2]] \rightarrow{}& \text{COST } \text{tc}[[\text{expression}]] + M(\text{jump});\ \text{IF expression } \text{cst}[[\text{statements}_1]]\ \text{cst}[[\text{statements}_2]] \\
\text{cst}[[\text{DOWHILE expression statements}]] \rightarrow{}& \text{DOWHILE } \bigl(\text{cst}[[\text{statements}]];\ \text{COST } \text{tc}[[\text{expression}]] + M(\text{jump})\bigr)\ \text{expression} \\
\text{cst}[[\text{WHILEDO expression statements}]] \rightarrow{}& \text{WHILEDO expression } \bigl(\text{cst}[[\text{statements}]];\ \text{COST } \text{tc}[[\text{expression}]] + M(\text{jump})\bigr) \\
\text{cst}[[\text{LOOP expression statements}]] \rightarrow{}& \text{COST } \text{tc}[[\text{expression}]] + M(\text{overhead});\ \text{LOOP expression } \bigl(\text{cst}[[\text{statements}]];\ \text{COST } M(\text{iteration})\bigr)
\end{align*}
\]

Table 3: Rules for annotating the program with COST statements.

\[
\begin{align*}
\text{col}[[\text{WHILEDO expression statements}]] \rightarrow{}& P(\text{expression}=\text{true}) \times \text{col}[[\text{statements}]] \\
\text{col}[[\text{DOWHILE expression statements}]] \rightarrow{}& \bigl(1 + P(\text{expression}=\text{true})\bigr) \times \text{col}[[\text{statements}]] \\
\text{col}[[\text{LOOP expression statements}]] \rightarrow{}& \text{expression} \times \text{col}[[\text{statements}]] \\
\text{col}[[\text{IF expression statements}_1\ \text{statements}_2]] \rightarrow{}& P(\text{expression}=\text{true}) \times \text{col}[[\text{statements}_1]] + P(\text{expression}=\text{false}) \times \text{col}[[\text{statements}_2]] \\
\text{col}[[\text{COST expression}]] \rightarrow{}& \text{expression} \\
\text{col}[[\text{statements}]] \rightarrow{}& \textstyle\sum \text{col}[[\text{statement}]] \text{ over every statement}
\end{align*}
\]

Table 4: Rules for collecting the COST statements into a time complexity formula.
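Read together, Tables 3 and 4 amount to a small recursive interpreter over the annotated program. The following Python sketch is illustrative only (the string-based formula representation and names are not from the report); it implements the collection rules of Table 4 on the nested-tuple notation used earlier, writing $P(\text{expression}=\text{false})$ as $1 - P(\text{expression}=\text{true})$.

```python
# Minimal sketch of the "collect" phase of Table 4: walk an annotated,
# structured program and build a symbolic SAD formula as a string.
def col(stmts):
    """Sum the collected formulas of a list of statements."""
    parts = [col_stmt(s) for s in stmts]
    parts = [p for p in parts if p != "0"]
    return " + ".join(parts) if parts else "0"

def col_stmt(stmt):
    kind = stmt[0]
    if kind == "COST":
        return stmt[1]                                   # already a cost expression
    if kind == "LOOP":                                   # ("LOOP", iteration_expr, body)
        return f"({stmt[1]}) * ({col(stmt[2])})"
    if kind == "WHILEDO":                                # ("WHILEDO", cond, body)
        return f"P({stmt[1]}) * ({col(stmt[2])})"
    if kind == "DOWHILE":                                # ("DOWHILE", body, cond)
        return f"(1 + P({stmt[2]})) * ({col(stmt[1])})"
    if kind == "IF":                                     # ("IF", cond, then, else)
        return (f"P({stmt[1]}) * ({col(stmt[2])}) + "
                f"(1 - P({stmt[1]})) * ({col(stmt[3])})")
    return "0"                                           # EVAL etc. carry no cost here

# A LOOP over n iterations whose body costs M_body plus M_iteration per pass:
program = [("COST", "M_overhead"),
           ("LOOP", "n", [("COST", "M_body"), ("COST", "M_iteration")])]
print(col(program))   # M_overhead + (n) * (M_body + M_iteration)
```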
4 Symbolic execution

The generation process as defined in the previous section generates a SAD time complexity formula which consists of machine parameters multiplied by factors. These factors are derived from the source code and are either expressions from loops, or probabilistic functions denoting the chance that the expressions in `IF`, `DOWHILE` or `WHILEDO` statements are true or false. There is very little we can do with solely this SAD formula, since we have lost all other information about the algorithm and thus about the values of the factors in the SAD formula.

The factors are all probabilistic functions of the problem size and statistical properties of the input data, so although we have many factors in the SAD formula (every control flow statement results in such a factor), we actually have only a few parameters on which the time complexity depends. Many factors in our current SAD formula therefore have the same value or depend on each other. The current SAD formula is thus not very useful, and we must find a way to extract more information from the formula.

Before collecting the cost statements we can perform an action known as symbolic execution [3, 6]. In symbolic execution we execute all steps in the program, as if we were simulating the execution, but without any input data. Therefore, many variables have no value and expressions cannot be evaluated entirely. Not being able to evaluate the expressions entirely means that we do not know how many times loops must be executed and which branch to take in an if-then-else statement. The idea is to track the definitions of variables and to substitute the uses of those variables with the expression that is assigned to them, but only if that definition is still valid within the context in which the variable is used. The symbolic execution is a transformation of the intermediate structured representation of the program; the result is a semantically equivalent program in which the uses of the variables have been replaced by what we know about those variables.

We use a simple algorithm for performing the symbolic execution. The algorithm steps through the program, replacing variables in expressions with their definitions from a global name space. If an assignment statement is encountered, the definition of the variable is added to the name space. Furthermore, the algorithm “kills” the definition of a variable by removing it from the name space when that becomes necessary, for example when the definition of a variable took place within the then-branch of an if statement and the variable is also used after the if statement. More complicated rules are necessary for loop statements.

The algorithm is a bit more intelligent in that it annotates every expression which is used as the definition of a variable with the location of that expression within the source code. This annotation is then also used when instantiating the variables where they are used. Variables which could not be instantiated (because their value was removed by a “kill”) also receive an annotation with the possible locations where they may have received their value. This in effect brings the program into static single assignment form [4], which may be used later by other analysis techniques.
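As an illustration of the substitution step (again a sketch of my own, not the report's algorithm, and ignoring the location annotations and loop rules just described), the following Python fragment tracks variable definitions in a name space, substitutes uses, and kills definitions made inside the branches of an `IF`.

```python
# Illustrative sketch of the substitution step of symbolic execution:
# walk a list of statements, keep a name space of known variable definitions,
# substitute uses, and "kill" definitions that are no longer certain
# (here: anything assigned inside an IF branch).
def substitute(expr, env):
    """Replace Var nodes by their known definition, if any."""
    if isinstance(expr, tuple):
        if expr[0] == "Var":
            return env.get(expr[1], expr)
        return (expr[0],) + tuple(substitute(a, env) for a in expr[1:])
    return expr

def symbolic_exec(stmts, env):
    out = []
    for stmt in stmts:
        if stmt[0] == "ASSIGN":                       # ("ASSIGN", name, expr)
            rhs = substitute(stmt[2], env)
            env[stmt[1]] = rhs                        # new definition
            out.append(("ASSIGN", stmt[1], rhs))
        elif stmt[0] == "IF":                         # ("IF", cond, then, else)
            cond = substitute(stmt[1], env)
            then_b = symbolic_exec(stmt[2], dict(env))
            else_b = symbolic_exec(stmt[3], dict(env))
            # kill every variable assigned in either branch: its value is
            # no longer known unconditionally after the IF
            for s in then_b + else_b:
                if s[0] == "ASSIGN":
                    env.pop(s[1], None)
            out.append(("IF", cond, then_b, else_b))
        else:
            out.append(stmt)
    return out

prog = [("ASSIGN", "n", ("Const", 100)),
        ("IF", ("Op2", "<", ("Var", "x"), ("Var", "n")),   # n is substituted here
            [("ASSIGN", "n", ("Op2", "+", ("Var", "n"), ("Const", 1)))], []),
        ("ASSIGN", "m", ("Var", "n"))]                     # n was killed: stays symbolic
print(symbolic_exec(prog, {}))
```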
4.1 Simulation

In the previous section we have shown how we can perform a symbolic execution on the input program. This symbolic execution takes place before “collecting” the cost statements, by which we generate the time complexity function. The time complexity formula generated now contains far fewer parameters than before, because the value of many factors is now known, or because we have found that some factors really depend on only a few parameters (e.g. the problem size). We may also know, by performing the symbolic execution, that a factor can only lie within a certain range. This can be used by a simulator, together with the definition of the values of certain variables as stochastic functions, to generate random values for those variables.

With the instantiation of random values for the variables in the expressions of the factors of the SAD formula, we can calculate one single sample in a simulation process. An expert user must define the stochastic functions of the variables which were left unevaluated by the symbolic execution process. The tool that is being developed helps such a user in tracking down these variables and in defining their stochastic functions at the proper position within the algorithm.

5 Status

It is important to realize that fully automatic performance prediction is provably impossible, because it is similar to solving the halting problem [7]. However, we believe that the tool we are developing can assist an expert user in understanding large programs by creating an environment in which the user can track down the usage of variables and know which possible values they can have.

The interactive tool which incorporates the ideas in this report has been built, together with the necessary graphical interface. The tool is still in an alpha phase and needs extensive testing with some medium and large size programs, both to test its robustness and to find out whether the current model is sufficient to allow for meaningful simulation.

The further development of the tool will include the incorporation of SPMD message passing constructs, which form level three of SAD.

A Intermediate format

Although the intermediate format used in this paper is actually stored in data structures, rather than generated in ASCII form, a formal description gives a good overview of the expressive power and completeness of the format. The table below describes the BNF grammar of the intermediate representation.

```
expression ::= Const constant
             | Var variable
             | Elt array-variable index
             | Op1 unary-operator expression
             | Op2 binary-operator expression-1 expression-2

index      ::= expression

statements ::= statement+

statement  ::= GOTO label
             | LABEL label
             | EVAL expression
             | ASSIGN expression expression
             | COST expression
             | IF expression stats-then stats-else
             | LOOP expression statements
             | WHILEDO expression statements
             | DOWHILE statements expression
```

Any (sub)expression is furthermore tagged with the type (integer, real, double precision, etc.) of the expression, as well as with the location where the subexpression originates from (as described in section 4).

References
{"Source-Url": "https://pure.uva.nl/ws/files/2073299/25754_Sloot52SAD.pdf", "len_cl100k_base": 6784, "olmocr-version": "0.1.53", "pdf-total-pages": 18, "total-fallback-pages": 0, "total-input-tokens": 36079, "total-output-tokens": 8114, "length": "2e12", "weborganizer": {"__label__adult": 0.0003211498260498047, "__label__art_design": 0.00033402442932128906, "__label__crime_law": 0.0003161430358886719, "__label__education_jobs": 0.0006251335144042969, "__label__entertainment": 7.659196853637695e-05, "__label__fashion_beauty": 0.00014293193817138672, "__label__finance_business": 0.0002722740173339844, "__label__food_dining": 0.00034928321838378906, "__label__games": 0.0005192756652832031, "__label__hardware": 0.0015268325805664062, "__label__health": 0.000457763671875, "__label__history": 0.0002818107604980469, "__label__home_hobbies": 0.0001327991485595703, "__label__industrial": 0.0006160736083984375, "__label__literature": 0.00023818016052246096, "__label__politics": 0.00023567676544189453, "__label__religion": 0.0004773139953613281, "__label__science_tech": 0.05072021484375, "__label__social_life": 7.289648056030273e-05, "__label__software": 0.006153106689453125, "__label__software_dev": 0.93505859375, "__label__sports_fitness": 0.00031948089599609375, "__label__transportation": 0.0005917549133300781, "__label__travel": 0.0002073049545288086}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31804, 0.02069]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31804, 0.55104]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31804, 0.89008]], "google_gemma-3-12b-it_contains_pii": [[0, 1152, false], [1152, 1537, null], [1537, 1537, null], [1537, 4266, null], [4266, 5896, null], [5896, 9036, null], [9036, 10184, null], [10184, 12813, null], [12813, 15795, null], [15795, 18562, null], [18562, 20044, null], [20044, 21871, null], [21871, 24202, null], [24202, 27497, null], [27497, 28733, null], [28733, 29623, null], [29623, 31387, null], [31387, 31804, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1152, true], [1152, 1537, null], [1537, 1537, null], [1537, 4266, null], [4266, 5896, null], [5896, 9036, null], [9036, 10184, null], [10184, 12813, null], [12813, 15795, null], [15795, 18562, null], [18562, 20044, null], [20044, 21871, null], [21871, 24202, null], [24202, 27497, null], [27497, 28733, null], [28733, 29623, null], [29623, 31387, null], [31387, 31804, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 31804, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 31804, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31804, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31804, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31804, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31804, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31804, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31804, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31804, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 31804, null]], "pdf_page_numbers": [[0, 1152, 1], [1152, 1537, 2], [1537, 1537, 3], [1537, 4266, 4], [4266, 5896, 5], [5896, 9036, 
6], [9036, 10184, 7], [10184, 12813, 8], [12813, 15795, 9], [15795, 18562, 10], [18562, 20044, 11], [20044, 21871, 12], [21871, 24202, 13], [24202, 27497, 14], [27497, 28733, 15], [28733, 29623, 16], [29623, 31387, 17], [31387, 31804, 18]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31804, 0.0]]}
olmocr_science_pdfs
2024-12-12
2024-12-12
6bb7248c8477ca94aba0519c6830adb713fcc690
Algorithms for NLP Language Modeling II Yulia Tsvetkov – CMU Slides: Taylor Berg-Kirkpatrick – CMU/UCSD Dan Klein – UC Berkeley My legal name is Alexander Perchov. But all of my many friends dub me Alex, because that is a more flaccid-to-utter version of my legal name. Mother dubs me Alexi-stop-spleening-me!, because I am always spleening her. If you want to know why I am always spleening her, it is because I am always elsewhere with friends, and disseminating so much currency, and performing so many things that can spleen a mother. Father used to dub me Shapka, for the fur hat I would don even in the summer month. He ceased dubbing me that because I ordered him to cease dubbing me that. It sounded boyish to me, and I have always thought of myself as very potent and generative. The Noisy-Channel Model - We want to predict a sentence given acoustics: \[ w^* = \arg \max_w P(w|a) \] - The noisy-channel approach: \[ w^* = \arg \max_w P(w|a) \] \[ = \arg \max_w P(a|w)P(w)/P(a) \] \[ = \arg \max_w P(a|w)P(w) \] Likelihood Acoustic model: HMMs over word positions with mixtures of Gaussians as emissions Prior Language model: Distributions over sequences of words (sentences) ASR Components Language Model source \( P(w) \) \[ \text{best } w \] Acoustic Model channel \( P(a|w) \) \[ \text{observed } a \] \[ \text{argmax } P(w|a) = \text{argmax } P(a|w)P(w) \] MT System Components Language Model source P(e) Translation Model cchannel P(f|e) encoder decoder best e argmax P(e|f) = argmax P(f|e)P(e) the station signs are in deep in english -14732 the stations signs are in deep in english -14735 the station signs are in deep into english -14739 the station 's signs are in deep in english -14740 the station signs are in deep in the english -14741 the station signs are indeed in english -14757 the station 's signs are indeed in english -14760 the station signs are indians in english -14790 the station signs are indian in english -14799 the stations signs are indians in english -14807 the stations signs are indians and english -14815 Language Models - A language model is a distribution over sequences of words (sentences) \[ P(w) = P(w_1 \ldots w_n) \] - What’s \( w \)? (closed vs open vocabulary) - What’s \( n \)? (must sum to one over all lengths) - Can have rich structure or be linguistically naive - Why language models? - Usually the point is to assign high weights to plausible sentences (cf acoustic confusions) - This is not the same as modeling grammaticality Language Models - Language models are distributions over sentences \[ P(w_1 \ldots w_n) \] - N-gram models are built from local conditional probabilities \[ P(w_1 \ldots w_n) = \prod_i P(w_i|w_{i-k} \ldots w_{i-1}) \] - The methods we’ve seen are backed by corpus n-gram counts \[ \hat{P}(w_i|w_{i-1}, w_{i-2}) = \frac{c(w_{i-2}, w_{i-1}, w_i)}{c(w_{i-2}, w_{i-1})} \] Kneser-Ney Smoothing - All orders recursively discount and back-off: \[ P_k(w|\text{prev}_{k-1}) = \frac{\max(c'(\text{prev}_{k-1}, w) - d, 0)}{\sum_v c'(\text{prev}_{k-1}, v)} + \alpha_{k-1} P_{k-1}(w|\text{prev}_{k-2}) \] - Alpha is a function computed to make the probability normalize (see if you can figure out an expression). - For the highest order, \(c'\) is the token count of the n-gram. For all others it is the context fertility of the n-gram: (see Chen and Goodman p. 18) \[ c'(w) = |\{w_{k-1} : c(w_{k-1}, w) > 0\}| \] - The unigram base case does not need to discount. - Variants are possible (e.g. different \(d\) for low counts) What’s in an N-Gram? 
▪ Just about every local correlation! ▪ Word class restrictions: “will have been ___” ▪ Morphology: “she ___”, “they ___” ▪ Semantic class restrictions: “danced the ___” ▪ Idioms: “add insult to ___” ▪ World knowledge: “ice caps have ___” ▪ Pop culture: “the empire strikes ___” ▪ But not the long-distance ones ▪ “The computer which I had just put into the machine room on the fifth floor ___.” The LAMBADA dataset Context: “Why?” “I would have thought you’d find him rather dry,” she said. “I don’t know about that,” said Gabriel. “He was a great craftsman,” said Heather. “That he was,” said Flannery. Target sentence: “And Polish, to boot,” said _______. Target word: Gabriel [Paperno et al. 2016] Other Techniques? - Lots of other techniques - Maximum entropy LMs (soon) - Neural network LMs (soon) - Syntactic / grammar-structured LMs (much later) How to Build an LM Tons of Data - Good LMs need lots of n-grams! [Brants et al, 2007] Storing Counts - Key function: map from n-grams to counts <table> <thead> <tr> <th>n-gram</th> <th>Count</th> </tr> </thead> <tbody> <tr> <td>searching for the best</td> <td>192593</td> </tr> <tr> <td>searching for the right</td> <td>45805</td> </tr> <tr> <td>searching for the cheapest</td> <td>44965</td> </tr> <tr> <td>searching for the perfect</td> <td>43959</td> </tr> <tr> <td>searching for the truth</td> <td>23165</td> </tr> <tr> <td>searching for the &quot;</td> <td>19086</td> </tr> <tr> <td>searching for the most</td> <td>15512</td> </tr> <tr> <td>searching for the latest</td> <td>12670</td> </tr> <tr> <td>searching for the next</td> <td>10120</td> </tr> <tr> <td>searching for the lowest</td> <td>10080</td> </tr> <tr> <td>searching for the name</td> <td>8402</td> </tr> <tr> <td>searching for the finest</td> <td>8171</td> </tr> <tr> <td>...</td> <td>...</td> </tr> </tbody> </table> All Our N-gram are Belong to You Thursday, August 3, 2006 Posted by Alex Franz and Thorsten Brants, Google Machine Translation Team Here at Google Research we have been using word n-gram models for a variety of R&D projects, such as statistical machine translation, speech recognition, spelling correction, entity detection, information extraction, and others. While such models have usually been estimated from training corpora containing at most a few billion words, we have been harnessing the vast power of Google’s datacenters and distributed processing infrastructure to process larger and larger training corpora. We found that there is no data like more data, and scaled up the size of our data by one order of magnitude, and then another, and then one more – resulting in a training corpus of one trillion words from public Web pages. We believe that the entire research community can benefit from access to such massive amounts of data. It will advance the state of the art, it will focus research in the promising direction of large-scale, data-driven approaches, and it will allow all research groups, no matter how large or small their computing resources, to play together. That’s why we decided to share the enormous dataset with everyone. We processed 1,924,656,767,229 words of running text and are publishing the counts for all 1,116,407,664 five-word sequences that appear at least 40 times. There are 13,988,301 unique words, after discarding words that appear less than 200 times. Watch for an announcement at the Linguistics Data Consortium (LDC), who will be distributing it soon, and then order your set of DVDs. 
And let us hear from you – we’re excited to hear what you will do with the data, and we’re always interested in feedback about this dataset, or other potential datasets that might be useful for the research community.

Update (22 Sept. 2006): The LDC now has the data available in their catalog. The counts are as follows:
- File sizes: approx. 24 GB compressed (gzip’ed) text files
- Number of tokens: 1,604,906,767,229
- Number of sentences: 85,139,665,584

Example: Google N-Grams

Google N-grams
- 14 million \(< 2^{24}\) words
- 2 billion \(< 2^{31}\) 5-grams
- 770,000 \(< 2^{20}\) unique counts
- 4 billion n-grams total
- 24GB compressed
- 6 DVDs

Efficient Storage

Naïve Approach

\(c(\text{cat}) = 12\), hash(cat) = 2; \(c(\text{the}) = 87\), hash(the) = 2; \(c(\text{and}) = 76\), hash(and) = 5; \(c(\text{dog}) = 11\), hash(dog) = 7; \(c(\text{have}) = ?\), hash(have) = 2

```java
HashMap<String, Long> ngram_counts;
String ngram1 = "I have a car";
String ngram2 = "I have a cat";
ngram_counts.put(ngram1, 123);
ngram_counts.put(ngram2, 333);
```

```java
HashMap<String[], Long> ngram_counts;
String[] ngram1 = {"I", "have", "a", "car"};
String[] ngram2 = {"I", "have", "a", "cat"};
ngram_counts.put(ngram1, 123);
ngram_counts.put(ngram2, 333);
```

A Simple Java Hashmap?

```
HashMap<String[], Long> ngram_counts;
```

Per 3-gram:
1. Pointer = 8 bytes
2. Map.Entry = 8 bytes (obj) + 3x8 bytes (pointers)
3. Long = 8 bytes (obj) + 8 bytes (long)
4. String[] = 8 bytes (obj) + 3x8 bytes (pointers)
… at best, Strings are canonicalized
Total: > 88 bytes

Obvious alternatives:
- Sorted arrays
- Open addressing

Open Address Hashing

[Slide sequence: inserting cat (hash 2), the (hash 2), and (hash 5), dog (hash 7), and have (hash 2) into an open-addressed table; colliding entries are placed in subsequent free slots along a probe sequence.]

Efficient Hashing
- Closed address hashing
  - Resolve collisions with chains
  - Easier to understand but bigger
- Open address hashing (a small sketch follows after this list)
  - Resolve collisions with probe sequences
  - Smaller but easy to mess up
- Direct-address hashing
  - No collision resolution
  - Just eject previous entries
  - Not suitable for core LM storage
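As a rough illustration of the open-addressing idea from these slides (my own toy example, not code from the course), here is a tiny linear-probing table for n-gram counts in Python; the capacity and probing scheme are arbitrary choices for the example.

```python
# Tiny open-addressed (linear probing) hash table for n-gram counts.
# Keys are tuples of words; collisions probe forward to the next free slot.
class OpenAddressCounts:
    def __init__(self, capacity=16):
        self.keys = [None] * capacity
        self.vals = [0] * capacity

    def _slot(self, key):
        i = hash(key) % len(self.keys)
        while self.keys[i] is not None and self.keys[i] != key:
            i = (i + 1) % len(self.keys)      # probe sequence
        return i

    def put(self, key, count):
        i = self._slot(key)
        self.keys[i] = key
        self.vals[i] = count

    def get(self, key):
        i = self._slot(key)
        return self.vals[i] if self.keys[i] == key else 0

counts = OpenAddressCounts()
counts.put(("searching", "for", "the", "best"), 192593)
counts.put(("searching", "for", "the", "right"), 45805)
print(counts.get(("searching", "for", "the", "best")))   # 192593
```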
```
HashMap<String[], Long> ngram_counts;
```

Per 3-gram:
- 1 Pointer = 8 bytes
- 1 Map.Entry = 8 bytes (obj) + 3x8 bytes (pointers)
- 1 Long = 8 bytes (obj) + 8 bytes (long)
- 1 String[] = 8 bytes (obj) + 3x8 bytes (pointers)
… at best, Strings are canonicalized
Total: > 88 bytes

Obvious alternatives:
- Sorted arrays
- Open addressing

Got 3 numbers under \(2^{20}\) to store?

7 = 0...00111, 1 = 0...00001, 15 = 0...01111 (20 bits each); fits in a primitive 64-bit long.

n-gram encoding: “the cat laughed” → 15176595 (three 20-bit word ids packed together), count 233; 32 bytes → 8 bytes.

c(the) = 23,135,851,162 < \(2^{35}\): 35 bits to represent integers between 0 and \(2^{35}\); so 60 bits of n-gram encoding (15176595) plus 35 bits of count (233).

Example: Google N-Grams

Google N-grams
- 14 million < \(2^{24}\) words
- 2 billion < \(2^{31}\) 5-grams
- 770,000 < \(2^{20}\) unique counts
- 4 billion n-grams total
- 24GB compressed
- 6 DVDs

Number of unique counts = 770,000 < \(2^{20}\): 20 bits to represent ranks of all counts.

<table>
<thead><tr><th>rank</th><th>count</th></tr></thead>
<tbody>
<tr><td>0</td><td>1</td></tr>
<tr><td>1</td><td>2</td></tr>
<tr><td>2</td><td>51</td></tr>
<tr><td>3</td><td>233</td></tr>
</tbody>
</table>

15176595 n-gram encoding → 60 bits; 3 ranks → 20 bits

So Far

Vocabulary

<table>
<thead><tr><th>word</th><th>id</th></tr></thead>
<tbody>
<tr><td>cat</td><td>0</td></tr>
<tr><td>the</td><td>1</td></tr>
<tr><td>was</td><td>2</td></tr>
<tr><td>ran</td><td>3</td></tr>
</tbody>
</table>

N-gram encoding scheme
- **unigram:** \( f(id) = id \)
- **bigram:** \( f(id_1, id_2) = ? \)
- **trigram:** \( f(id_1, id_2, id_3) = ? \)
Counts lookup

<table>
<thead><tr><th>rank</th><th>freq</th></tr></thead>
<tbody>
<tr><td>0</td><td>1</td></tr>
<tr><td>1</td><td>2</td></tr>
<tr><td>2</td><td>51</td></tr>
<tr><td>3</td><td>233</td></tr>
</tbody>
</table>

Count DB

[Slide figure: three sorted arrays (unigram, bigram, trigram), each holding packed n-gram encodings with their count ranks, e.g. 15176595 → 0051, 16078820 → 0381, 16576628 → 0021.]

Hashing vs Sorting

<table>
<thead><tr><th colspan="2">Sorting (query: 15176595)</th></tr><tr><th>c</th><th>val</th></tr></thead>
<tbody>
<tr><td>15176583</td><td>0076</td></tr>
<tr><td>15176595</td><td>0051</td></tr>
<tr><td>15176600</td><td>0018</td></tr>
<tr><td>16078820</td><td>0381</td></tr>
<tr><td>16089320</td><td>0171</td></tr>
<tr><td>16576628</td><td>0021</td></tr>
<tr><td>16980420</td><td>0030</td></tr>
<tr><td>17020330</td><td>0482</td></tr>
<tr><td>17176583</td><td>0039</td></tr>
</tbody>
</table>

<table>
<thead><tr><th colspan="2">Hashing</th></tr><tr><th>c</th><th>val</th></tr></thead>
<tbody>
<tr><td>16078820</td><td>0381</td></tr>
<tr><td>15176595</td><td>0051</td></tr>
<tr><td>15176583</td><td>0076</td></tr>
<tr><td>16576628</td><td>0021</td></tr>
<tr><td>15176600</td><td>0018</td></tr>
<tr><td>16089320</td><td>0171</td></tr>
<tr><td>15176583</td><td>0039</td></tr>
<tr><td>14980420</td><td>0030</td></tr>
<tr><td>15020330</td><td>0482</td></tr>
</tbody>
</table>

Context Tries

[Slide figure: a trie over n-gram contexts, with words and their associated values at the nodes; reference: Hsu and Glass 2008.]

Context Encodings

Google N-grams
- 10.5 bytes/n-gram
- 37 GB total

[Many details from Pauls and Klein, 2011]

[Slide figure: 1-gram, 2-gram and 3-gram arrays in the context encoding; each entry stores a word id w (20 bits) and a value val, and higher-order entries store a context index c pointing into the lower-order array.]
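To make the packed-encoding idea concrete, here is a small Python sketch (mine, not the slides' or Pauls and Klein's implementation) that packs three 20-bit word ids into one 64-bit integer key and unpacks them again.

```python
# Pack three 20-bit word ids into a single 64-bit integer key (and back).
# 3 x 20 = 60 bits, which leaves room in a 64-bit word; real systems (e.g. the
# context encoding above) use the spare bits or a parallel array for values.
WORD_BITS = 20
MASK = (1 << WORD_BITS) - 1

def encode_trigram(id1, id2, id3):
    assert max(id1, id2, id3) <= MASK
    return (id1 << (2 * WORD_BITS)) | (id2 << WORD_BITS) | id3

def decode_trigram(key):
    return (key >> (2 * WORD_BITS)) & MASK, (key >> WORD_BITS) & MASK, key & MASK

vocab = {"the": 1, "cat": 0, "laughed": 4}
key = encode_trigram(vocab["the"], vocab["cat"], vocab["laughed"])
print(key, decode_trigram(key))   # one integer instead of three strings
```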
N-Gram Lookup this is a 4-gram $$p(0121\ 0374\ 0045\ 4820) = -8.7$$ Compression Idea: Differential Compression <table> <thead> <tr> <th>$c$</th> <th>$w$</th> <th>$val$</th> </tr> </thead> <tbody> <tr> <td>15176585</td> <td>678</td> <td>3</td> </tr> <tr> <td>15176587</td> <td>678</td> <td>2</td> </tr> <tr> <td>15176593</td> <td>678</td> <td>1</td> </tr> <tr> <td>15176613</td> <td>678</td> <td>8</td> </tr> <tr> <td>15179801</td> <td>678</td> <td>1</td> </tr> <tr> <td>15176585</td> <td>680</td> <td>298</td> </tr> <tr> <td>15176589</td> <td>680</td> <td>1</td> </tr> </tbody> </table> <table> <thead> <tr> <th>$\Delta c$</th> <th>$\Delta w$</th> <th>$val$</th> </tr> </thead> <tbody> <tr> <td>+2</td> <td>+0</td> <td>2</td> </tr> <tr> <td>+6</td> <td>+0</td> <td>1</td> </tr> <tr> <td>+40</td> <td>+0</td> <td>8</td> </tr> <tr> <td>+188</td> <td>+0</td> <td>1</td> </tr> <tr> <td>+2</td> <td>+0</td> <td>1</td> </tr> </tbody> </table> | $|\Delta w|$ | $|\Delta c|$ | $|val|$ | |-------------|-------------|-------| | 40 | 24 | 3 | | 3 | 2 | 3 | | 3 | 2 | 3 | | 9 | 2 | 6 | | 12 | 2 | 3 | | 36 | 4 | 15 | | 6 | 2 | 3 | | 15176585 | 678 | 563097887 | 956 | 3 | +2 | +0 | 2 | +6 | +0 | 1 | +40 | +2 | 8 | ... | Variable Length Encodings Encoding “9” 000 1001 Length in Unary Number in Binary Google N-grams - 2.9 bytes/n-gram - 10 GB total [Elias, 75] Speed-Ups Rolling Queries this is + a 4-gram 12438010 0045 4820 12438010 0045 a this is a 15176583 4820 is a 4-gram <table> <thead> <tr> <th>c</th> <th>w</th> <th>val</th> <th>suffix</th> </tr> </thead> <tbody> <tr> <td>15176583</td> <td>682</td> <td>0065</td> <td>00000480</td> </tr> <tr> <td>15176595</td> <td>682</td> <td>0808</td> <td>00000675</td> </tr> <tr> <td>15176600</td> <td>682</td> <td>0012</td> <td>00000802</td> </tr> <tr> <td>16078820</td> <td>682</td> <td>0400</td> <td>00001321</td> </tr> </tbody> </table> LM \[ val \] -7.8 LM \[ val \] -5.4 14986731 Idea: Fast Caching <table> <thead> <tr> <th>n-gram</th> <th>probability</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>124 80 42 1243</td> </tr> <tr> <td>1</td> <td>37 2435 243 21</td> </tr> <tr> <td>2</td> <td>804 42 4298 43</td> </tr> </tbody> </table> hash(124 80 42 1243) = 0 hash(1423 43 42 400) = 1 LM can be more than 10x faster w/ direct-address caching Approximate LMs - Simplest option: hash-and-hope - Array of size $K \sim N$ - (optional) store hash of keys - Store values in direct-address - Collisions: store the max - What kind of errors can there be? - More complex options, like bloom filters (originally for membership, but see Talbot and Osborne 07), perfect hashing, etc Homework 1 Overview
{"Source-Url": "http://demo.clab.cs.cmu.edu/11711fa18/slides/lecture_3_language_models_2.pdf", "len_cl100k_base": 6638, "olmocr-version": "0.1.50", "pdf-total-pages": 48, "total-fallback-pages": 0, "total-input-tokens": 44521, "total-output-tokens": 8696, "length": "2e12", "weborganizer": {"__label__adult": 0.0005087852478027344, "__label__art_design": 0.0005955696105957031, "__label__crime_law": 0.0006780624389648438, "__label__education_jobs": 0.004253387451171875, "__label__entertainment": 0.00023829936981201172, "__label__fashion_beauty": 0.0002613067626953125, "__label__finance_business": 0.0006437301635742188, "__label__food_dining": 0.0004892349243164062, "__label__games": 0.0006852149963378906, "__label__hardware": 0.0009756088256835938, "__label__health": 0.0010833740234375, "__label__history": 0.0004444122314453125, "__label__home_hobbies": 0.00015282630920410156, "__label__industrial": 0.0006694793701171875, "__label__literature": 0.002704620361328125, "__label__politics": 0.0005488395690917969, "__label__religion": 0.0007023811340332031, "__label__science_tech": 0.261474609375, "__label__social_life": 0.0002980232238769531, "__label__software": 0.0240478515625, "__label__software_dev": 0.697265625, "__label__sports_fitness": 0.0003819465637207031, "__label__transportation": 0.0005588531494140625, "__label__travel": 0.00019752979278564453}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 16934, 0.15819]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 16934, 0.17127]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 16934, 0.72864]], "google_gemma-3-12b-it_contains_pii": [[0, 130, false], [130, 791, null], [791, 1199, null], [1199, 1392, null], [1392, 1539, null], [1539, 2080, null], [2080, 2527, null], [2527, 2905, null], [2905, 3557, null], [3557, 3990, null], [3990, 4300, null], [4300, 4459, null], [4459, 4478, null], [4478, 4547, null], [4547, 5450, null], [5450, 7550, null], [7550, 7747, null], [7747, 7765, null], [7765, 7981, null], [7981, 8145, null], [8145, 8337, null], [8337, 8698, null], [8698, 9095, null], [9095, 9673, null], [9673, 10015, null], [10015, 10353, null], [10353, 10688, null], [10688, 10688, null], [10688, 10827, null], [10827, 10926, null], [10926, 11069, null], [11069, 11260, null], [11260, 11494, null], [11494, 12403, null], [12403, 12926, null], [12926, 12940, null], [12940, 13084, null], [13084, 13195, null], [13195, 14601, null], [14601, 14671, null], [14671, 14683, null], [14683, 15692, null], [15692, 15839, null], [15839, 15849, null], [15849, 16264, null], [16264, 16574, null], [16574, 16915, null], [16915, 16934, null]], "google_gemma-3-12b-it_is_public_document": [[0, 130, true], [130, 791, null], [791, 1199, null], [1199, 1392, null], [1392, 1539, null], [1539, 2080, null], [2080, 2527, null], [2527, 2905, null], [2905, 3557, null], [3557, 3990, null], [3990, 4300, null], [4300, 4459, null], [4459, 4478, null], [4478, 4547, null], [4547, 5450, null], [5450, 7550, null], [7550, 7747, null], [7747, 7765, null], [7765, 7981, null], [7981, 8145, null], [8145, 8337, null], [8337, 8698, null], [8698, 9095, null], [9095, 9673, null], [9673, 10015, null], [10015, 10353, null], [10353, 10688, null], [10688, 10688, null], [10688, 10827, null], [10827, 10926, null], [10926, 11069, null], [11069, 11260, null], [11260, 11494, null], [11494, 12403, null], [12403, 12926, null], [12926, 12940, null], [12940, 13084, null], [13084, 
13195, null], [13195, 14601, null], [14601, 14671, null], [14671, 14683, null], [14683, 15692, null], [15692, 15839, null], [15839, 15849, null], [15849, 16264, null], [16264, 16574, null], [16574, 16915, null], [16915, 16934, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 16934, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 16934, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 16934, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 16934, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 16934, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 16934, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 16934, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 16934, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 16934, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 16934, null]], "pdf_page_numbers": [[0, 130, 1], [130, 791, 2], [791, 1199, 3], [1199, 1392, 4], [1392, 1539, 5], [1539, 2080, 6], [2080, 2527, 7], [2527, 2905, 8], [2905, 3557, 9], [3557, 3990, 10], [3990, 4300, 11], [4300, 4459, 12], [4459, 4478, 13], [4478, 4547, 14], [4547, 5450, 15], [5450, 7550, 16], [7550, 7747, 17], [7747, 7765, 18], [7765, 7981, 19], [7981, 8145, 20], [8145, 8337, 21], [8337, 8698, 22], [8698, 9095, 23], [9095, 9673, 24], [9673, 10015, 25], [10015, 10353, 26], [10353, 10688, 27], [10688, 10688, 28], [10688, 10827, 29], [10827, 10926, 30], [10926, 11069, 31], [11069, 11260, 32], [11260, 11494, 33], [11494, 12403, 34], [12403, 12926, 35], [12926, 12940, 36], [12940, 13084, 37], [13084, 13195, 38], [13195, 14601, 39], [14601, 14671, 40], [14671, 14683, 41], [14683, 15692, 42], [15692, 15839, 43], [15839, 15849, 44], [15849, 16264, 45], [16264, 16574, 46], [16574, 16915, 47], [16915, 16934, 48]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 16934, 0.2989]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
edd8eab4c50f9b640f8ade0eaa28cf3db289a27a
Fibonacci heaps Definition. A Fibonacci heap (Fredman & Tarjan 1987) is a data structure consisting of key-item pairs, \[ \{ (k_1, x_1), (k_2, x_2), \ldots, (k_n, x_n) \} \] where - the keys \( k_i \) are drawn from a totally-ordered domain, and are not necessarily distinct, while - the items \( x_i \) are unordered, and are all distinct. A Fibonacci heap supports the following operations: - **Create ()** Create and return an empty heap. - **Insert \((k, x, H)\)** Insert pair \((k, x)\) into heap \( H \), and return a pointer to the pair. Assumes item \( x \) is not in \( H \). - **Delete \((p, H)\)** Delete pair \( p \) (returned from a prior Insert) from \( H \). Definition, cont'd - **Union (A, B)**: Destructively merge the two heaps A and B into one heap, and return the merged heap. Assumes the items in A, B are disjoint. - **Minimum (H)**: Return the item in H whose associated key is minimum. - **Extract (H)**: Delete the pair in H with minimum key. - **Decrease (p, k, H)**: Decrease the key of pair p in H to k. If k is not less than the current key of p, this has no effect. ### Remarks - Fibonacci heaps support these operations in the following time bounds, using $\Theta(n)$ space: <table> <thead> <tr> <th>Operation</th> <th>Worst-case</th> <th>Amortized</th> </tr> </thead> <tbody> <tr> <td>Create</td> <td>$\Theta(1)$</td> <td>$\Theta(1)$</td> </tr> <tr> <td>Insert</td> <td>$\Theta(1)$</td> <td>$\Theta(1)$</td> </tr> <tr> <td>Union</td> <td>$\Theta(1)$</td> <td>$\Theta(1)$</td> </tr> <tr> <td>Minimum</td> <td>$\Theta(1)$</td> <td>$\Theta(1)$</td> </tr> <tr> <td>Decrease</td> <td>$\Theta(1)$</td> <td>$\Theta(1)$</td> </tr> <tr> <td>Delete</td> <td>$\Theta(n)$</td> <td>$\Theta(\log n)$</td> </tr> <tr> <td>Extract</td> <td>$\Theta(n)$</td> <td>$\Theta(\log n)$</td> </tr> </tbody> </table> Fibonacci heaps are efficient in an amortized sense, but not in the worst case. This is usually sufficient, as most applications of heaps perform a series of operations. (A heap is now known that achieves the amortized time of Fibonacci heaps in the worst-case (Brodal 1996), but it is very complicated.) Remarks, cont'd • Fibonacci heaps are most useful when a few Extract and many Decrease operations are performed. This occurs in many graph algorithms, such as for shortest paths and minimum spanning trees. • Fibonacci heaps tend to be slow in practice. Another data structure, the pairing heap (Fredman, Sedgewick, Sleator, Tarjan 1986), tends to be fast in practice, but is theoretically less efficient (Fredman 1998). • Note that in contrast to search trees, Fibonacci heaps do not support the Find operation on a key. (Heap elements are accessed through the pointer returned by an Insert.) Representation - A Fibonacci heap is represented as a forest of heap-ordered trees with arbitrary number of children: ``` min ↓ one heap ``` - Along root-to-leaf paths, keys are nondecreasing. - Roots and children are unordered. - For the heap, we maintain: - a linked list of roots, - a pointer to the root with minimum key, and - a count of the total number of nodes. - At each node, we maintain: - a linked list of children, - a pointer to its parent, - a count of the number of children (called its degree), and - a bit (called its mark). Potential function • Let \[ R(H) := \text{number of roots in } H, \text{ and} \] \[ M(H) := \text{number of marked nodes in } H. \] In the analysis, we use the following potential function: \[ \Phi(H) := R(H) + 2M(H). 
\]
The potential of a collection of heaps (when performing Union operations) is the sum of their potentials.

• Note that
\[ \Phi(H_0) = 0, \text{ and} \]
\[ \Phi(H_i) \geq 0 \text{ for all } i \geq 1, \]
so \( \Phi \) is a valid potential function.

Implementation of simple operations

Create
- Just create an empty heap $H$.
- This takes $\Theta(1)$ worst-case time, and since $\Delta \Phi(H) = 0$, it also takes $\Theta(1)$ amortized time.

Insert
- Just create a new tree consisting of the single unmarked node $(k, x)$, add it to the forest for $H$, and update the minimum root pointer by comparing with $k$:
$$ \begin{align*} H & \quad \rightarrow \quad H' \\ \bigtriangleup \quad \cdots \quad \bigtriangleup & \quad \rightarrow \quad \bigtriangleup \quad \cdots \quad \bigtriangleup \; (k, x) \end{align*} $$
- This takes amortized time:
$$ \Theta(1) + \underbrace{\Delta R(H) + 2\,\Delta M(H)}_{\Delta \Phi} = \Theta(1) + (1 + 2 \cdot 0) = \Theta(1). $$

Simple operations, cont'd

Minimum
- Just return the item at the root pointed at by the minimum pointer.
- This takes $\Theta(1)$ amortized time, as $\Delta \Phi = 0$.

Union
- Just concatenate the root lists of heaps $A$ and $B$ to form $H$, and compare the minimum pointers of $A$ and $B$ to determine $H$'s minimum.
- This takes $\Theta(1)$ amortized time, since
$$ \Delta \Phi = \Phi(H) - (\Phi(A) + \Phi(B)) = 0. $$

Extract

Idea
- Remove from \( H \) the node \( r \) pointed at by the minimum root pointer.
- Concatenate the children of \( r \) onto \( H \)'s root list.
- Scan the root list to determine the new minimum root.
- Since scanning is expensive, consolidate the root list by making some roots children of others. (This reduces the number of roots in the forest to speed up future Extracts.)
- Pictorially, [figure not reproduced].

Implementation of Extract

```
procedure Consolidate(H)
begin
    d := f(Size(H))
    A := Array(0, d)
    for i := 0 to d do                     -- Θ(d) time
        A[i] := Nil
    for each root r of H do begin
        s := A[Degree[r]]
        while s ≠ Nil do begin
            if Key[r] > Key[s] then
                Swap r, s.
            Remove s from the root list of H.
            Make s a child of r.
            A[Degree[s]] := Nil
            Degree[r] := Degree[r] + 1
            Mark[s] := False
            s := A[Degree[r]]
        end
        A[Degree[r]] := r
    end
    Scan A[0..d] to collect the new root list of H,   -- Θ(d) time
    and update the minimum root pointer.
end
```

Analysis of Extract
- We measure the time for an Extract by counting the number of times roots are linked, and the number of times an array element is accessed. Thus the actual time is at most:
\[
\underbrace{f(n)}_{\text{concatenate the children of the extracted node onto the root list}}
+ \underbrace{f(n) + R(H) - 1}_{\text{link the roots during consolidation}}
+ \underbrace{f(n) + 1}_{\text{initialize the array}}
+ \underbrace{f(n) + 1}_{\text{scan the array to collect the final root list}}
= 4f(n) + R(H) + 1.
\]
- The change in potential \( \Delta \Phi = \Phi(H') - \Phi(H) \) is at most:
\[
\underbrace{\bigl( f(n) + 1 + 2M(H) \bigr)}_{\text{upper bound on the final potential: at most } f(n)+1 \text{ roots}}
- \underbrace{\bigl( R(H) + 2M(H) \bigr)}_{\text{initial potential; no new nodes are marked}}
= f(n) - R(H) + 1.
\]
Analysis of Extract, cont'd
- So the amortized time for an Extract is at most:
\[
\underbrace{4f(n) + R(H) + 1}_{\text{actual time}} + \underbrace{f(n) - R(H) + 1}_{\text{change in potential}} = 5f(n) + 2 = O(f(n)),
\]
where \( f(n) \) is an upper bound on the maximum degree in an \( n \)-node Fibonacci heap.
- Intuitively, the time spent linking roots during an Extract is compensated by the reduction in the number of roots, as captured by the potential function.

Decrease \((p, k, H)\)

Idea
- Decrease the key of node \(p\). Let \(q\) be its parent. If \(p\)'s key is now less than \(q\)'s key (so heap order is violated), cut the link from \(p\) to \(q\) and make \(p\) a new root.
- If this causes \(q\) to have lost two children since the time \(q\) was linked to its parent, cut the link from \(q\) to its parent and make \(q\) a new root. Continue this test at \(q\)'s parent. (This is called a cascading cut.)

Idea of Decrease, cont'd
- Pictorially, [figure: a cascading cut; the node whose heap order is violated is cut, and each ancestor that has already lost two children is cut in turn, producing several new roots].

Idea of Decrease, cont'd
- We detect whether a node has lost 2 children by its mark:
  - When \( q \) is linked to its parent, we set Mark[q] := False. (Examine procedure Consolidate.)
  - When \( q \) loses a child, and Mark[q] = False, we set Mark[q] := True.
  - When \( q \) loses a child, and Mark[q] = True, we continue cascading to \( q \)'s parent.

Implementation of Decrease

```
procedure Decrease(p, k, H)
begin
    Key[p] := min{Key[p], k}
    q := Parent[p]
    if q ≠ Nil and Key[p] < Key[q] then begin
        Cut(p, H)
        Cascade(q, H)
    end
    Update the minimum root pointer for H by comparing with Key[p].
end

procedure Cut(p, H)
begin
    q := Parent[p]
    Remove p from q's child list.
    Degree[q] := Degree[q] - 1
    Parent[p] := Nil
    Add p to the root list of H.
    Mark[p] := False
end

procedure Cascade(p, H)
begin
    -- Node p has just lost a child. Consider a cascading cut at p.
    while Parent[p] ≠ Nil and Mark[p] do begin
        q := Parent[p]
        Cut(p, H)
        p := q
    end
    if Parent[p] ≠ Nil then
        Mark[p] := True          -- note that Mark[p] = False at this point
end
```

(A small illustrative sketch of this cut and cascading-cut bookkeeping is given below, after the Delete operation.)

Analysis of Decrease
• We measure the actual time for Decrease by:
\[
\underbrace{1}_{\text{time outside Cut and Cascade}} + \underbrace{c}_{\text{number of calls to Cut (including those in Cascade)}}
\]
since the time taken by Decrease is \(\Theta(c+1)\).
• We bound the change in potential as follows. Each call to Cut creates one new root. Moreover, each call to Cut within Cascade unmarks a marked node. In addition, the call to Cascade may mark a new node. Thus,
\[
\Delta \Phi = \Delta R + 2\,\Delta M \leq c + 2\bigl(-(c-1) + 1\bigr) = 4 - c.
\]

Analysis of Decrease, cont'd
• Thus the amortized time for Decrease is at most:
\[
\underbrace{(1 + c)}_{\text{actual time}} + \underbrace{(4 - c)}_{\text{change in potential}} = 5.
\]
So Decrease takes \(O(1)\) amortized time.
• Intuitively, the factor of two in the \(2M(H)\) term in \(\Phi(H)\) is needed so that, when unmarking a node,
  - one unit pays for the cut, and
  - the other unit pays for the addition of a new root to the potential.

Delete \((p, H)\)
- We simply perform:
  - Decrease \((p, -\infty, H)\), followed by
  - Extract \((H)\).
- By our analysis of Decrease and Extract, this takes
\[ O(1) + O(f(n)) = O(f(n)) \]
amortized time, where \(f(n)\) is again our upper bound on the maximum degree in an \(n\)-node Fibonacci heap.
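The pseudocode above can be mirrored almost directly in a working data structure. The following Python sketch is my own illustration, not code from these notes: it keeps only the pieces needed for Decrease (a node with parent, children, degree and mark, plus Cut and Cascade), uses a plain Python list for the root list instead of circular doubly linked lists, and omits Extract/Consolidate.

```python
# Compact, illustrative sketch of the decrease-key machinery (cut and
# cascading cut) described above.  Not a full Fibonacci heap.
class Node:
    def __init__(self, key, item):
        self.key, self.item = key, item
        self.parent = None
        self.children = []
        self.mark = False

    @property
    def degree(self):
        return len(self.children)       # "degree" = number of children

class FibHeapSketch:
    def __init__(self):
        self.roots = []
        self.min = None

    def insert(self, key, item):
        node = Node(key, item)
        self.roots.append(node)
        if self.min is None or key < self.min.key:
            self.min = node
        return node

    def _cut(self, p):
        q = p.parent
        q.children.remove(p)
        p.parent = None
        p.mark = False
        self.roots.append(p)            # p becomes a new root

    def _cascade(self, p):
        while p.parent is not None and p.mark:
            q = p.parent
            self._cut(p)                # p has lost two children: cut it too
            p = q
        if p.parent is not None:
            p.mark = True               # first lost child: just mark

    def decrease(self, p, k):
        if k >= p.key:
            return
        p.key = k
        q = p.parent
        if q is not None and p.key < q.key:   # heap order violated
            self._cut(p)
            self._cascade(q)
        if k < self.min.key:
            self.min = p

# usage: build a tiny heap and decrease a key
h = FibHeapSketch()
a, b = h.insert(10, "a"), h.insert(3, "b")
h.decrease(a, 1)
print(h.min.key)   # 1
```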
Bounding \( f(n) \), cont'd
- We now turn to examining Fibonacci heaps with Decrease and Delete.

**Definition 2** The Fibonacci tree \( T_k \), for \( k \geq 0 \), is defined inductively: \( T_0 \) is a single node; \( T_1 \) is a root with a single child; and for \( k \geq 2 \), \( T_k \) is obtained from \( T_{k-1} \) and \( T_{k-2} \) by making the root of \( T_{k-2} \) an additional child of the root of \( T_{k-1} \).

**Example**

<table>
<thead>
<tr><th>Degree of root</th><th>0</th><th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>...</th></tr>
</thead>
<tbody>
<tr><td>Size of tree</td><td>1</td><td>2</td><td>3</td><td>5</td><td>8</td><td>13</td><td>...</td></tr>
</tbody>
</table>

Bounding $f(n)$, cont'd

**Definition 3** The Fibonacci number $F_k$, for $k \geq 0$, is defined inductively by:
$$F_k := \begin{cases} 0, & k = 0; \\ 1, & k = 1; \\ F_{k-2} + F_{k-1}, & k \geq 2. \end{cases}$$

**Example**

<table>
<thead>
<tr><th>$F_0$</th><th>$F_1$</th><th>$F_2$</th><th>$F_3$</th><th>$F_4$</th><th>$F_5$</th><th>$F_6$</th><th>$F_7$</th><th>$\ldots$</th></tr>
</thead>
<tbody>
<tr><td>0</td><td>1</td><td>1</td><td>2</td><td>3</td><td>5</td><td>8</td><td>13</td><td>$\ldots$</td></tr>
</tbody>
</table>

**Lemma 3** The Fibonacci tree $T_k$ has
- maximum degree $k$, which is the degree of its root, and
- size $F_{k+2}$.

**Proof** By induction on $k$.

Basis ($k = 0, 1$): $T_0$ is a single node, so its maximum degree is $0$ and its size is $1 = F_2$; $T_1$ has maximum degree $1$ and size $2 = F_3$. Thus the basis holds.

Induction ($k \geq 2$): By induction, the maximum degree of $T_k$ is the degree of the root of $T_k$, which is $1 + (k-1) = k$. By induction, the size of $T_k$ is
\[ F_{(k-2)+2} + F_{(k-1)+2} = F_k + F_{k+1} = F_{k+2}. \quad \square \]

**Lemma 4** For all \( k \geq 0 \),
\[ F_k = \frac{1}{\sqrt{5}} \left( \phi^k - \hat{\phi}^k \right), \]
where
\[ \phi = \frac{1}{2} \left( 1 + \sqrt{5} \right) > 1.618, \qquad \hat{\phi} = \frac{1}{2} \left( 1 - \sqrt{5} \right) < -0.618. \]

**Proof** By induction on \( k \).

Basis \((k = 0, 1)\):
\[ \frac{1}{\sqrt{5}} \left( \phi^0 - \hat{\phi}^0 \right) = 0 = F_0, \qquad \frac{1}{\sqrt{5}} \left( \phi^1 - \hat{\phi}^1 \right) = \frac{1}{\sqrt{5}} \cdot \frac{2\sqrt{5}}{2} = 1 = F_1. \]

Induction \((k \geq 2)\): First notice that \( \phi \) and \( \hat{\phi} \) satisfy
\[ 1 + \phi = \phi^2, \qquad 1 + \hat{\phi} = \hat{\phi}^2. \tag{$*$} \]
So,
\[ F_k = F_{k-2} + F_{k-1} = \frac{1}{\sqrt{5}} \left( \phi^{k-2} - \hat{\phi}^{k-2} \right) + \frac{1}{\sqrt{5}} \left( \phi^{k-1} - \hat{\phi}^{k-1} \right) = \frac{1}{\sqrt{5}} \left( \phi^{k-2} (1 + \phi) - \hat{\phi}^{k-2} (1 + \hat{\phi}) \right) = \frac{1}{\sqrt{5}} \left( \phi^k - \hat{\phi}^k \right), \quad \text{by ($*$)}. \quad \square \]
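As a quick sanity check of Lemma 4 and of the tree sizes in the table above, the following small Python snippet (illustrative, not part of the notes) compares the closed form against iteratively computed Fibonacci numbers.

```python
# Quick numeric check of Lemma 4's closed form and of the size F_{k+2} of the
# Fibonacci tree T_k from Lemma 3.
from math import sqrt

phi = (1 + sqrt(5)) / 2
phihat = (1 - sqrt(5)) / 2

def fib(k):
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

for k in range(10):
    closed = (phi**k - phihat**k) / sqrt(5)
    assert round(closed) == fib(k)           # Lemma 4
print([fib(k + 2) for k in range(6)])         # sizes of T_0..T_5: [1, 2, 3, 5, 8, 13]
```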
Bounding $f(n)$, cont'd

**Corollary 3** A Fibonacci tree on $n$ nodes has maximum degree $\Theta(\log n)$.

**Proof** By Lemma 4, for all $k \geq 0$,
$$\frac{1}{\sqrt{5}} (\varphi^k - |\hat{\varphi}|^k) \leq F_k \leq \frac{1}{\sqrt{5}} (\varphi^k + |\hat{\varphi}|^k).$$
Since $|\hat{\varphi}|^k \leq 1$ for all $k \geq 0$,
$$\frac{1}{\sqrt{5}} (\varphi^k - 1) \leq F_k \leq \frac{1}{\sqrt{5}} (\varphi^k + 1).$$
Since $\frac{2}{5} \varphi^k \geq 1$ for all $k \geq 2$,
$$\frac{3}{5 \sqrt{5}} \varphi^k \leq F_k \leq \frac{7}{5 \sqrt{5}} \varphi^k.$$
Hence
$$F_k = \Theta(\varphi^k).$$
Taking logarithms and noting $\varphi^k = \omega(1)$,
$$\log_\varphi F_k = \Theta(k). \quad (*)$$
Let $d$ be the maximum degree of a Fibonacci tree $T$. By Lemma 3, $T$ has $n = F_{d+2}$ nodes. Thus by ($*$),
$$\log n = \Theta(d+2) = \Theta(d).$$
Hence
$$d = \Theta(\log n).$$

Bounding $f(n)$, cont'd

- We need one more fact about Fibonacci trees.

**Lemma 5** For $k \geq 2$, the Fibonacci tree $T_k$ has the following structure: its root has $k$ children, which are the roots of copies of $T_0, T_0, T_1, T_2, \ldots, T_{k-2}$.

**Proof** By induction on $k$.

**Basis** ($k = 2$): By Definition 2, $T_2$ is obtained by linking the root of a $T_0$ to the root of a $T_1$. Since the root of $T_1$ already has one child $T_0$, the root of $T_2$ has the two children $T_0$ and $T_0$.

**Induction** ($k > 2$): By definition, $T_k$ is obtained by linking the root of a $T_{k-2}$ to the root of a $T_{k-1}$. By the induction hypothesis, the root of $T_{k-1}$ has children $T_0, T_0, T_1, \ldots, T_{k-3}$; hence the root of $T_k$ has children $T_0, T_0, T_1, \ldots, T_{k-3}, T_{k-2}$. $\square$

Bounding $f(n)$, cont'd

- We are now ready to relate Fibonacci heaps to Fibonacci trees.

**Lemma 6** In a Fibonacci heap, the smallest possible subtree rooted at a node of degree $k$ is the Fibonacci tree $T_k$.

**Proof** By induction on $k$.

Basis ($k = 0, 1$): Note that $T_0$ and $T_1$ are the smallest possible trees with roots of degree 0 and 1, and that they can be formed by Fibonacci heap operations.

Induction ($k \geq 2$): Let $v$ be a node of degree $k$ in a Fibonacci heap, and number its children $w_1, w_2, \ldots, w_k$ in the order in which they were linked to $v$ ($w_1$ earliest).

We claim that the degree of $w_i$, for $1 \leq i \leq k$, is at least $\max \{ i-2, 0 \}$. To see this, note that when $w_i$ was linked to $v$, they had the same degree, and at that point $v$ already had $w_1, \ldots, w_{i-1}$ as children, so its degree was at least $i-1$. Hence, when linked to $v$, $w_i$ had degree at least $i-1$. Since the link was made, $w_i$ can have lost at most one child. (Otherwise, $w_i$ would have been cut and could not still be a child of $v$.) Thus $w_i$ has degree at least $i-2$.

The smallest possible subtree rooted at $v$ must consist of smallest possible subtrees rooted at $w_1, w_2, \ldots, w_k$. From the claim, and by induction, these are $T_0, T_0, T_1, \ldots, T_{k-2}$. By Lemma 5, the resulting tree is $T_k$. Finally, observe that $T_k$ can be built by a series of Fibonacci heap operations. (Exercise.) $\square$

**Theorem** In a Fibonacci heap on \( n \) nodes, the maximum degree, \( f(n) \), is \( O(\log n) \).

**Proof** Given a Fibonacci heap \( H \) with \( n \) nodes, let
- \( k \) be its maximum degree, and
- \( m \) be the size of the smallest subtree of \( H \) rooted at a node of degree \( k \).

We have,
\[ \begin{align*} k &= O(\log |T_k|) & \text{by Lemma 3, Corollary 3,} \\ &= O(\log m) & \text{by Lemma 6,} \\ &= O(\log n) & \text{since } m \leq n. \end{align*} \]

**Remark** Tracing the details of the proof, we get
\[ f(n) \leq \left\lfloor \log_\varphi \frac{5\sqrt{5}\, n}{3} \right\rfloor - 2. \]

**Corollary** On a Fibonacci heap of \( n \) nodes,
- Delete, Extract take \( O(\log n) \) amortized time, and
- Create, Insert, Union, Minimum, Decrease take \( O(1) \) amortized time.
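To make the Theorem and the Remark concrete, one can compare, for a few values of $n$, the largest degree $d$ with $F_{d+2} \leq n$ (the exact worst case, by Lemma 6) against the explicit bound stated in the Remark. A small sketch, with illustrative values of $n$:

```python
# For a given n, the maximum possible degree d in an n-node Fibonacci heap
# satisfies F_{d+2} <= n (Lemma 6); the explicit bound from the Remark,
# floor(log_phi(5*sqrt(5)*n/3)) - 2, should never be exceeded.
import math

phi = (1 + math.sqrt(5)) / 2

def fib(k):
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

def max_degree_exact(n):
    """Largest d with F_{d+2} <= n."""
    d = 0
    while fib(d + 3) <= n:
        d += 1
    return d

def max_degree_remark(n):
    return math.floor(math.log(5 * math.sqrt(5) * n / 3, phi)) - 2

for n in (10, 100, 10_000, 1_000_000):
    print(n, max_degree_exact(n), max_degree_remark(n))
```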
{"Source-Url": "https://www2.cs.arizona.edu/classes/cs545/spring21/fibonacci-heaps.pdf", "len_cl100k_base": 6112, "olmocr-version": "0.1.50", "pdf-total-pages": 30, "total-fallback-pages": 0, "total-input-tokens": 35400, "total-output-tokens": 7459, "length": "2e12", "weborganizer": {"__label__adult": 0.0004012584686279297, "__label__art_design": 0.0003962516784667969, "__label__crime_law": 0.0005459785461425781, "__label__education_jobs": 0.00103759765625, "__label__entertainment": 0.00011914968490600586, "__label__fashion_beauty": 0.000217437744140625, "__label__finance_business": 0.0003638267517089844, "__label__food_dining": 0.0006475448608398438, "__label__games": 0.0012769699096679688, "__label__hardware": 0.0026073455810546875, "__label__health": 0.0014047622680664062, "__label__history": 0.00048065185546875, "__label__home_hobbies": 0.0002911090850830078, "__label__industrial": 0.0009756088256835938, "__label__literature": 0.0004684925079345703, "__label__politics": 0.0003495216369628906, "__label__religion": 0.0008015632629394531, "__label__science_tech": 0.344970703125, "__label__social_life": 0.0001341104507446289, "__label__software": 0.00897979736328125, "__label__software_dev": 0.6318359375, "__label__sports_fitness": 0.000568389892578125, "__label__transportation": 0.000911712646484375, "__label__travel": 0.0003044605255126953}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 16670, 0.01703]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 16670, 0.61215]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 16670, 0.78504]], "google_gemma-3-12b-it_contains_pii": [[0, 684, false], [684, 1111, null], [1111, 1924, null], [1924, 2520, null], [2520, 3086, null], [3086, 3561, null], [3561, 4282, null], [4282, 4703, null], [4703, 5112, null], [5112, 6017, null], [6017, 6780, null], [6780, 7251, null], [7251, 7719, null], [7719, 7919, null], [7919, 8368, null], [8368, 8822, null], [8822, 9103, null], [9103, 9641, null], [9641, 10080, null], [10080, 10424, null], [10424, 10885, null], [10885, 11339, null], [11339, 11882, null], [11882, 12941, null], [12941, 13819, null], [13819, 14469, null], [14469, 15101, null], [15101, 15865, null], [15865, 16488, null], [16488, 16670, null]], "google_gemma-3-12b-it_is_public_document": [[0, 684, true], [684, 1111, null], [1111, 1924, null], [1924, 2520, null], [2520, 3086, null], [3086, 3561, null], [3561, 4282, null], [4282, 4703, null], [4703, 5112, null], [5112, 6017, null], [6017, 6780, null], [6780, 7251, null], [7251, 7719, null], [7719, 7919, null], [7919, 8368, null], [8368, 8822, null], [8822, 9103, null], [9103, 9641, null], [9641, 10080, null], [10080, 10424, null], [10424, 10885, null], [10885, 11339, null], [11339, 11882, null], [11882, 12941, null], [12941, 13819, null], [13819, 14469, null], [14469, 15101, null], [15101, 15865, null], [15865, 16488, null], [16488, 16670, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 16670, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 16670, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 16670, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 16670, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 16670, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 
16670, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 16670, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 16670, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 16670, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 16670, null]], "pdf_page_numbers": [[0, 684, 1], [684, 1111, 2], [1111, 1924, 3], [1924, 2520, 4], [2520, 3086, 5], [3086, 3561, 6], [3561, 4282, 7], [4282, 4703, 8], [4703, 5112, 9], [5112, 6017, 10], [6017, 6780, 11], [6780, 7251, 12], [7251, 7719, 13], [7719, 7919, 14], [7919, 8368, 15], [8368, 8822, 16], [8822, 9103, 17], [9103, 9641, 18], [9641, 10080, 19], [10080, 10424, 20], [10424, 10885, 21], [10885, 11339, 22], [11339, 11882, 23], [11882, 12941, 24], [12941, 13819, 25], [13819, 14469, 26], [14469, 15101, 27], [15101, 15865, 28], [15865, 16488, 29], [16488, 16670, 30]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 16670, 0.03788]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
88b3324841040954bc2a7c4e2a1869096ff6b1b3
Processing the Evolution of Quality Requirements of Web Service Orchestrations: A Pattern-Based Approach
Tarek Zernadji, Chouki Tibermacine, Foudil Cherif

To cite this version: HAL Id: lirmm-00977367, https://hal-lirmm.ccsd.cnrs.fr/lirmm-00977367v2, submitted on 16 Jan 2015.

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Processing the Evolution of Quality Requirements of Web Service Orchestrations: a Pattern-based Approach

Abstract—Web services currently remain one of the leading technologies for implementing components of service-oriented software architectures. One of the most frequent forms of composition of these entities is Web service orchestration. As any other software artifact, such service compositions are subject to inescapable evolution (Lehman's first law of software evolution). Whether to answer new user requirements, or to adapt, correct, or enhance the provided functionality or quality, an architect has to conduct evolutions of the design of these artifacts. In this paper, we present a method which aims at helping software architects of Web service orchestrations in processing an evolution of quality requirements. This method introduces a template for describing quality evolution "intents". It then analyzes these intents and assists the architects in answering them by proposing some patterns. We adopt in our work the postulate that quality can be implemented through patterns, which are specified with checkable/processable languages. Besides this, the proposed method simulates the application of these patterns and notifies the architect of their consequences on the other implemented qualities.

I. INTRODUCTION: CONTEXT AND MOTIVATION

Service-oriented architectures (SOA) provide, regardless of particular implementation technologies, an architectural style for building service-based software systems. This style assumes the existence of a collection of reusable services, which deliver some functionality to client applications. One possible and quite frequent form of this kind of software architecture is Web service orchestration. BPEL processes are one of the most widely used technologies for making these orchestrations executable. As any other software artifact, these software architectures must evolve during the system's life cycle (Lehman's 1st law [1]), and undergo changes that can harm the qualities originally planned by architects (Lehman's 7th law). One of the major causes of this alteration of quality is the phenomenon of "knowledge vaporization" [2], which is due to the fact that most decisions made during the construction of software architectures remain implicit (undocumented). The lack of information about previously made decisions can lead architects to accidentally affect them and consequently alter the qualities they implement.
In a previous work [3], we have proposed an approach to address the problem of “knowledge vaporization” by documenting major design decisions and their rationale (quality requirements) and use such documentation for supervising the architectural evolution of Web service orchestrations. In this paper, we propose a process that provides a systematic assistance to architects during the evolution of quality requirements of Web service orchestrations. Before starting the process, the architect should first make an architecture diagnosis and gather some information that feed the first step. We propose a template for describing quality evolution “intents” in order to enable the specification of this information (Section II-A). Then, such intent descriptions are processed (Section II-B) in order to propose to the architect some patterns for helping her/him to take design decisions. We argue in this work that for answering quality evolution intents, an architect can have as a design decision the selection of an SOA pattern. Our process is thus based on a documentation of design decisions as SOA patterns and their rationale as the quality attributes they implement. This process aims more precisely at helping an architect in choosing the well suited pattern to apply on her/his architecture. It uses a set of evaluation criteria and a quality impact analysis for that purpose (Section II-D). The architect is then assisted in a semi-automatic way to apply the selected pattern (Section II-C) thanks to reusable and customizable scripts defined using a scripting language for Web service orchestrations, named WS-BScript, which is introduced in this paper. The process ends by asking the architect to document the new design decisions (Section II-G) made into her/his architecture so that future evolutions can be assisted in the same way. Before concluding and presenting some perspectives to our work, we make an overview of the related work (Section III). II. PROPOSED APPROACH Through this process the architect is assisted to: i) make concrete changes leading to a new service orchestration, and ii) perform this with minimal negative effects on existing qualities. The process steps are detailed in the following sections. A. Evolution Intent Specification The architect should specify the needed information according to a template described in Table I. She/he provides in this template the quality attribute targeted by this evolution activity (i.e. the architect wants to implement in the orchestration). We adopt at the top level of our specification the ISO 9126\(^1\) quality model. We consider in our work quality characteristics mainly as “abstract” quality attributes and sub-characteristics as “concrete” quality attributes which are specializations of the first ones. Some ISO 9126 quality sub-characteristics like “security” are however still considered as “abstract” quality attributes for service-based systems. These sub-characteristics may have several specializations. Additionally, the architect should identify the architectural regions which are the main architectural elements (or sets of these elements) in the BPEL process concerned by the changes. Besides this, the architect has to indicate the evolution kind by indicating if she/he wants to add (a new), enhance (an existing), weaken, or withdraw (an existing) quality attribute. Additional information should be specified if the architect wants to withdraw or reduce a quality attribute. This is stated in the “Related Quality Attribute” section. 
For example, when the architect tries to remove “Authentication” for affecting (weakening or removing) “Security”, there is a final goal of enhancing “Performance”. In the other evolution kinds (add or enhance), this section is left empty. B. Evolution Intent Analysis The evolution intent specification is analyzed, and depending on the evolution kind indicated in this specification two cases are distinguished. These are detailed in the following subsections. The proposed process is based on an “SOA Patterns Catalog\(^2\)”, where each pattern is specified according to a specific structure shown in Table II. 1) Processing the Evolution by Adding or Replacing a Pattern: In this case, the architect wants to enhance (replace the existing pattern implementing the quality attribute by applying one or several other patterns) or add a new quality attribute (apply a new pattern) to the orchestration. The patterns catalog is automatically analyzed and a collection of patterns related to the targeted quality is identified\(^2\) and proposed to the architect. As depicted in Table II, the pattern’s specification includes a “name” with a simple description of its role, the “quality attribute” the pattern implements, an “architectural script” which describes the way it should be applied in the orchestration, and finally “architectural constraints” which are formal specifications of a pattern and allow the checking of its presence or absence in the orchestration. Table I. Template for Quality Evolution Intent Description <table> <thead> <tr> <th>Evolution Quality Attribute (What?)</th> <th>State the quality attribute targeted by the evolution activity.</th> </tr> </thead> <tbody> <tr> <td>Evolution Kind (How?)</td> <td>State if the evolution targets to add, enhance, weaken or withdraw the quality attribute.</td> </tr> <tr> <td>Related Quality Attribute (Ultimately what?)</td> <td>If the evolution kind is withdrawing or weakening the quality attribute, state here the quality attribute which will be ultimately enhanced or added (left empty otherwise).</td> </tr> <tr> <td>Architectural Regions (Where?)</td> <td>Indicate where in the orchestration changes will occur.</td> </tr> </tbody> </table> Table II. Pattern Structure Specification <table> <thead> <tr> <th>Pattern Name</th> <th>The identifier of the pattern and a simple textual description of its role.</th> </tr> </thead> <tbody> <tr> <td>Quality Attribute</td> <td>The ISO 9126 quality characteristic or sub-characteristic that is implemented by the pattern (or concrete quality attributes).</td> </tr> <tr> <td>Architectural Script</td> <td>The set of parameterized actions that indicate the way the pattern can be applied on the architecture. Actions are formalized using a scripting language for Web service orchestration reconfiguration.</td> </tr> <tr> <td>Architectural Constraints</td> <td>The list of parameterized constraints that enable to check if the orchestration is compliant with the pattern.</td> </tr> </tbody> </table> 2) Processing the Evolution by Removing a Pattern: No patterns are proposed here, rather a cancellation of the pattern implementing the quality attribute is performed. This cancellation is automatically obtained from the scripts for a pattern application. C. 
Pattern Application This is an important step in the process where the selected SOA patterns are applied on a targeted Web service orchestration by means of some scripts, which specify simple architectural changes expressed with a Web service orchestration scripting language called “WS-BScript”. WS-BScript is a lightweight DSL that enables the patterns catalog administrator, whose responsibility is to feed the patterns catalog, to specify primitive changes making possible the reconfiguration of Web service orchestrations. The idea behind WS-BScript is to formalize some SOA patterns in order to apply them as much automatically as possible in the form of reusable design decisions. This language allows the definition of parameterized “scripts”. A script is composed of a set of actions like add, wire, and remove, among others\(^3\). A script declares a set of parameters (BPEL orchestration elements), which represent the scope of the architectural actions. In this step of the process, the architect will apply one or several predefined\(^4\) scripts (issued from the catalog of patterns) on her/his orchestration. For this end, the architect has to configure the scripts she/he wants to apply by initializing their parameters first and then by customizing them on the fly (through ask actions). D. Quality Impact Analysis There are two key elements that are used in the Quality Impact Analysis step of the process: i) the use of a Multi-Criteria Decision Making (MCDM) method, named “WSM” \(^4\) (Weighted Sum Model), to evaluate a number of SOA pattern alternatives and determine the one that best satisfies the architect in a quality requirement evolution step, and ii) the solicitation of a quality-oriented assistance service that helps in diagnosing the consequences of any applied pattern on the other implemented qualities. For the first element, the MCDM problem we want to solve can be expressed as following: “what is the pattern that impacts the less the most important quality attributes, having the best degree of satisfaction for the targeted quality attribute, and is the most suitable to the architect preferences (context \(^2\)A quality attribute may be implemented by applying several patterns in different ways. \(^3\)The complete specification can be found here: https://sites.google.com/site/wsbscript/wsbscript-specification \(^4\)The patterns scripts are already specified in the patterns catalog, the architect has just to apply them. suitability, e.g., price, applicability related conditions, etc.)?" We have formulated the MCDM problem as follows: - Alternatives are some selected patterns we want to classify; - Decision criteria are defined as follows: 1) Criticality of the impacted quality attribute ($C_1$); 2) Satisfaction degree of a pattern for a quality attribute ($C_2$); 3) Context-Suitability of the pattern ($C_3$). For our evaluation purpose using the “WSM” method, we chose to normalize the aforementioned criteria according to the scale proposed in [5]. The later gives eleven scores ranging from 0.045..0.665, 0.745, to 0.955 and their corresponding linguistic terms from “Exceptionally low”, “High”, “Very high”, to “Exceptionally high”. This normalization allows us to deal with a single-dimensional case (all the units are the same) of the MCDM problem which fits well the use of “WSM” method. 
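The WSM-based ranking just mentioned can be sketched in a few lines. This is a minimal illustration only: the pattern names and all numeric values below are invented for the example (they are not from the paper's catalog), the criteria values and weights are assumed to be already normalized to the linguistic scale cited above, and treating criterion C1 (criticality of the impacted quality attribute) as "lower is better", hence inverted before summing, is our reading of the problem statement rather than something stated in the text.

```python
# A hedged sketch of weighted-sum (WSM) ranking of SOA pattern alternatives.
# Criteria: C1 = criticality of the impacted quality attribute (interpreted
# here as "lower is better"), C2 = satisfaction degree, C3 = context suitability.
# All names and numbers are illustrative assumptions.

alternatives = {
    "Service Facade":    {"C1": 0.335, "C2": 0.745, "C3": 0.665},
    "Message Screening": {"C1": 0.500, "C2": 0.665, "C3": 0.590},
    "Trusted Subsystem": {"C1": 0.410, "C2": 0.865, "C3": 0.500},
}
weights = {"C1": 0.410, "C2": 0.865, "C3": 0.335}   # architect preferences

def wsm_score(values):
    # Invert C1 so that a less critical impact contributes a higher score.
    adjusted = {"C1": 1 - values["C1"], "C2": values["C2"], "C3": values["C3"]}
    return sum(weights[c] * adjusted[c] for c in weights)

ranking = sorted(alternatives, key=lambda p: wsm_score(alternatives[p]), reverse=True)
for pattern in ranking:
    print(f"{pattern}: {wsm_score(alternatives[pattern]):.3f}")
```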
If there are $M$ alternatives and $N$ criteria, then the best alternative (pattern) is the one that satisfies (in the maximization case) the following formula [4]: $$A_i^{WSM} = \max_1^N \sum_{j=1}^N a_{ij}w_j, \text{ for } i = 1, 2, 3, ..., M. \quad (1)$$ Weights $w_j$ represent the importance of each criterion according to the architect’s preferences in the evolution process (also normalized according to the scale). $a_{ij}$ is the value of an alternative “$i$” (pattern) in terms of a decision criterion “$j$”. We note here that the patterns in the catalog are previously documented by the architect according to the model proposed in [3]. This model introduces some fine-grained information namely, the criticality degree ($a_{iC_1}$) of a quality attribute, the formalization degree, and the satisfaction degree ($a_{iC_2}$). The documentation is enriched with a context-suitability degree ($a_{iC_3}$), which is specified and documented at evolution time because it depends on the pattern’s suitability to a given situation and to the orchestration. This degree cannot be reused in different orchestrations. It can however be reused in the future evolutions of the same orchestration. The second element of the quality-related impact analysis step is an assistance service which aims to notify the architect of the consequences of the applied pattern on the other qualities. It indicates what are the related qualities that may be altered when applying the pattern which implements the new quality attribute. This assistance is mainly based on the evaluation of some OCL-like constraints that we used to specify parameterized architecture constraints [6] for Web service orchestrations. These constraints are defined using OCL and navigate in a metamodel of BPEL. They serve to verify if an architecture conforms to the pattern or not. E. New Patterns Definition It is on the responsibility of the architect to validate its choice of a specific pattern or to reject it. If the architect is not satisfied with any of the proposed patterns, then she/he can define new patterns (specialization of existing patterns, for example), which she/he is asked to document according to the proposed structure (Table II). They will be considered as new reusable architecture design decisions that could potentially be applied on some architecture descriptions in the future. After that, the architect is redirected to the “Patterns Application” step to simulate the effect of the new catalogued pattern. F. Pattern Cancellation The architect may want to enhance a quality attribute not by adding a new pattern that implements the quality, or by replacing an existing pattern by another one which implements better the quality, but by eliminating or weakening a given quality attribute. In this case, the process execution takes another path. Thus, if the specified kind in the evolution intent is to withdraw or weaken a quality attribute, the process goes through the pattern cancellation step where an elimination of the concerned pattern is performed. This is done by deducing the opposite effect of the pattern’s architectural actions, hence avoiding to the architect the burden of doing it manually or specifying the cancellation script. The generated cancellation script is then executed on the Web service orchestration. 
The generation of a cancellation script is handled automatically (by the “WS-BScript” interpreter) following a bottom-up approach starting by the last action in the script and going up to the first one, by respecting some specific rules. G. Documentation of the New Architecture In this step, the chosen pattern is applied to the orchestration and added in the architecture decision documentation as a new design decision. This documentation contains all design decisions (SOA patterns) that was made to build the architecture. In addition, the architect has to complete a part of this documentation, namely the criticality degree of the quality attribute the pattern implements, the satisfaction degree of the pattern for the quality attribute, the formalization degree of the pattern, and also the related qualities of the quality attribute. This information is necessary for the evolution assistance especially in the patterns selection process (quality impact analysis step). III. RELATED WORK Many works have been proposed in the literature to address quality requirements integration in software architectures. Al-naeem et al [7] proposed “ArchDesigner”, which use optimization techniques to determine optimal combination of design alternatives. We use a simulation and feedback technique at the evolution stage to help architects in the decision selection process to meet their quality goals. Architectural design decisions in our work are SOA Patterns which are applied in semi-automatic way, while in their work they are high level architecture design decisions (the choice of Java EE, for example). In [8], [9] the authors use reusable design decisions namely attribute primitives and architectural tactics, we use SOA patterns. However, they focus on the design stage, while we focus on the evolution stage. In addition, we give support to the architect to choose among several possible alternatives of a design decision the one that satisfies the best a given --- 5A complete list of these rules can be found in: https://sites.google.com/site/wsbscript/ws-bscript-cancellation-rules quality goal. Besides this, we help the architect in applying the selected design decision in a semi-automatic fashion, and we give her/him assistance to make impact analysis. In [10], [11] the authors use a Patterns catalog to document patterns as identified design decisions. However, their work differs in the way pattern selection and validation is performed. Indeed, in [10] they use questions to help architects in choosing and validating patterns, whereas, we use an MCDM method in a complementary way with a quality-related impact analysis to select and validate patterns. Additionally, our process offers a support to integrate patterns in a semi-automatic way. In [12], similarly to our work they mapped some quality attributes addressed by SOA patterns [13] (that could not be related to any quality attribute in the S-Cube Quality Reference Model (QRM) [14]) to quality attributes from the ISO 9126 quality model. Their work is complementary to our work and could be helpful to the architect especially while building the patterns catalog. It could be used to deal with mapping between patterns and the quality attributes they impact as well as filtering only patterns having impact from those without impact on quality attributes. Harrison et al [15] investigated as in [12] a quantitative evaluation of the impact of some architectural patterns (Layers, Pipes and Filters, Blackboard...) from [16] on quality attributes. 
In our work, we identify automatically the impact through the solicitation of a quality-oriented assistance service that helps in diagnosing the consequences of any applied pattern on the other implemented qualities. In [17], the authors integrate quality requirements as usable information at a functional and runtime level, while our work is positioned at the architectural level and incorporates quality requirements as reusable solutions (SOA Patterns). This approach is complementary to our work, since our work deals with quality requirements as architectural design decisions which are used to generate designs encompassing quality requirements, and not as extra information which are exploited at a post-deployment time. In [18], the quality attributes and the high level architectural design decisions achieving them are identified manually. In our work, design decisions are identified and proposed to the architects in a catalog as patterns (for SOA). They used a decision graph transformation strategy to analyze a design decision impact, whereas, we simulate the application of a selected collection of patterns and assist the selection (MCDM method) of the most appropriate pattern (semi-automatically), then report its impact (automatically) to the architect. IV. CONCLUSION AND FUTURE WORK We argue in this paper that catalogs such as [19], [16], or [13] of design patterns can be documented in a (more or less) structured, automatically checkable and semi-automatically processable way. Such documentation is operated by a process that we specified in this paper, and whose main goal is to assist architects in processing the evolution of quality requirements by suggesting to them the “most” appropriate patterns: i) that respects the more the evolved quality attribute (the pattern that gives the best scores for the evaluation criteria), and ii) that affects the less the other quality requirements already satisfied and documented in the software architecture (through the use of the quality impact analysis). We deal in our work with a particular specialization of service-oriented software architectures, which are Web service orchestrations concretely defined as BPEL processes. As perspectives to our work, we would like to enhance the organization of the catalog of patterns. Instead of a flat organization, we want to define a hierarchical one, built using some classification techniques like FCA (Formal Concept Analysis [20]). In this way, we can easily look for substitutable patterns which can be proposed together to the architect in the process. Besides this, we plan to integrate in the proposed process an impact analysis activity on the business logic aspect, thus evaluate also the impact on the existing functionality implemented in the software architecture. REFERENCES
{"Source-Url": "https://hal-lirmm.ccsd.cnrs.fr/lirmm-00977367/document", "len_cl100k_base": 4590, "olmocr-version": "0.1.49", "pdf-total-pages": 5, "total-fallback-pages": 0, "total-input-tokens": 14709, "total-output-tokens": 5717, "length": "2e12", "weborganizer": {"__label__adult": 0.0003659725189208984, "__label__art_design": 0.0005593299865722656, "__label__crime_law": 0.0003147125244140625, "__label__education_jobs": 0.0005817413330078125, "__label__entertainment": 5.322694778442383e-05, "__label__fashion_beauty": 0.00014972686767578125, "__label__finance_business": 0.00022232532501220703, "__label__food_dining": 0.00033473968505859375, "__label__games": 0.00035858154296875, "__label__hardware": 0.0004987716674804688, "__label__health": 0.0004818439483642578, "__label__history": 0.0002067089080810547, "__label__home_hobbies": 6.395578384399414e-05, "__label__industrial": 0.0002994537353515625, "__label__literature": 0.00028324127197265625, "__label__politics": 0.0002551078796386719, "__label__religion": 0.0004382133483886719, "__label__science_tech": 0.00826263427734375, "__label__social_life": 8.338689804077148e-05, "__label__software": 0.004001617431640625, "__label__software_dev": 0.9814453125, "__label__sports_fitness": 0.000255584716796875, "__label__transportation": 0.00031065940856933594, "__label__travel": 0.00018584728240966797}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25908, 0.02421]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25908, 0.27292]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25908, 0.89463]], "google_gemma-3-12b-it_contains_pii": [[0, 1135, false], [1135, 6189, null], [6189, 13136, null], [13136, 19460, null], [19460, 25908, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1135, true], [1135, 6189, null], [6189, 13136, null], [13136, 19460, null], [19460, 25908, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 25908, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25908, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25908, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25908, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 25908, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25908, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25908, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25908, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25908, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25908, null]], "pdf_page_numbers": [[0, 1135, 1], [1135, 6189, 2], [6189, 13136, 3], [13136, 19460, 4], [19460, 25908, 5]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25908, 0.1087]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
0f742a95e35f9c35491fe287502f348ecd1156d8
Faster than Pairing and Fibonacci Heaps? Rank-Relaxed Weak Queues Dr. Stefan Edelkamp TZI, Universität Bremen TZI-Bericht Nr. 54 2009 Abstract A run-relaxed weak queue by Elmasry et al. (2005) is a priority queue data structure with insert and decrease-key in $O(1)$ as well as delete and delete-min in $O(\log n)$ worst-case time. One further advantage is the small space consumption of $3n + O(\log n)$ pointers. In this paper we propose rank-relaxed weak queues, reducing the number of rank violations nodes for each level to a constant, while providing amortized constant time for decrease-key. Compared to run-relaxed weak queues, the new structure additionally gains one pointer per node. An empirical evaluation shows that the implementation can outperform Fibonacci and pairing heaps in practice even on rather simple data types. 1 Introduction Priority queues are among the most important non-trivial data structures and essential for many fundamental algorithms, like Dijkstra’s approach to compute shortest paths [3], or minimum spanning tree generation according to Kruskal’s algorithm [15]. For a comparison function operating on totally ordered keys, besides providing the dictionary operations insert and delete, priority queues feature extracting the minimum and decreasing the value of a key. The most prominent implementation of priority queues featured in many text books are Fibonacci heaps [12], which can be roughly characterized as lazy-join versions of binomial queues. They provide insert and decrease-key in $O(1)$ amortized, as well as delete and delete-min in $O(\log n)$ amortized. Run-relaxed weak queues as proposed in Elmasry et al. [9] are worst-case efficient priority queues, by means that all running times of Fibonacci heaps are worst-case instead of amortized. They have been derived from run-relaxed heaps [4], which have matching performance, but a rather involved and less space-efficient implementation. The core difference between the two is that the latter relies on binomial queues, while the former uses perfect weak-heaps, where weak-heaps [5] have been designed for efficient sorting. Compared to ordinary binary heaps, weak-heaps are less restrictive. A key only needs to be smaller than all keys in its right subtree. As the root node has no left subtree, it contains the minimal key. The efficiencies for sorting, worst and best case inputs, and the construction of a (double-ended) priority queue has been studied by [7]. In this paper we improve run-relaxed weak queues to rank-relaxed weak queues for better practical time and space performance by refining the data structure for storing and reducing potential heap-order violating nodes. The core result is that by sacrificing worst-case for amortized complexity at most 4 potential heap-order violating nodes are needed at each height. As the operation is not to be so important in applications this paper does not discuss an efficient meld of two rank-relaxed weak queues. As the structure for heap-order violation becomes simpler for rank-relaxed weak-queues compared to run-relaxed weak-queues we expect that a worst case running time of $O(\min\{\log m, \log n\})$ for two structures of $n$ and $m$ elements should be possible to achieve. Our experiments in a space-optimized implementation show that the efficiency of our implementation can be superior to the performance of Fibonacci and pairing heap priority queue implementations. Moreover, wrt. 
new developments of processor architectures to support leading zero bit counts, the efficiency might further rise. The price we pay wrt. the original implementation of run-relaxed weak-heaps is that decrease-key is no longer worst-case but amortized constant time. Our approach further shows that the space consumption of relaxed weak queues can be reduced.

2 Run-Relaxed Weak Queues

Run-relaxed weak queues are binary tree variants of run-relaxed heaps [4], and reflect worst-case efficient priority queues (with constant-time efficiencies for insert and decrease-key and logarithmic time for delete and delete-min). Other structures achieving this performance are Brodal heaps [2] and fat heaps [14]. What distinguishes run-relaxed weak queues from the others is that they are considerably easier to implement [19].

Weak-heaps [5] are obtained by relaxing the heap requirements. More precisely, a weak-heap satisfies the following three conditions: the root value of any subtree is smaller than or equal to all elements to its right (weak-heap dominance property), the root of the entire structure has no left child (optimal root property), and leaf nodes are found on the last two levels only (heap balance property). In perfect weak-heaps, the right subtree of the root is a complete binary tree.

Weak-heaps have a natural array embedding that utilizes so-called reverse bits \(r_i, i \in \{0, \ldots, n-1\}\). The index of the left child of element \(i\) is \(2i + r_i\) and the right child is found at \(2i + 1 - r_i\). For this purpose \(r_i\) is interpreted as an integer in \(\{0, 1\}\), being initialized with value 0. By flipping the bit, the status of being a left and a right child is exchanged, an essential property to realize the join of two weak-heaps in constant time. As an example, take \(a = [1, 4, 5, 2, 7, 5, 3, 8, 15, 11, 10, 13, 9, 12]\) and \(r = [0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1]\) as an array representation of a weak-heap. Its binary tree equivalent is shown in Fig. 1.

Figure 1: Example of a perfect weak-heap. Reflected nodes are shown in gray.

Weak-heaps are state-of-the-art for sequential sorting. For \(l = \lceil \log n \rceil\), the worst-case number of comparisons of weak-heap sort [5] is \(nl - 2^l + n - 1 \leq n \log n + 0.09n\) [7]. An improvement sorts indexes in \(n \log n - 0.91n\) comparisons [6]. An array-based solution is not an option for our studies. One main reason is that it is difficult to efficiently meld two structures.

Weak queues [9] build on the observation that there is a one-to-one correspondence between heap-ordered binomial trees (as featured in run-relaxed heaps, as well as in binomial queues, Fibonacci heaps, and others) and perfect weak-heaps (as featured in run-relaxed weak queues). We observe that binomial tree ranks correspond to weak-heap heights. Recall that a binomial tree \(B_n\) is a tree of height \(n\) with \(2^n\) nodes in total and \(\binom{n}{i}\) nodes in depth \(i\). The structure of \(B_n\) is found by unifying two structures \(B_{n-1}\), where one is added as an additional successor to the second. As an unfortunate side effect, this increases the node branching factor considerably. Operations on perfect weak-heaps are slightly more flexible than on binomial trees. Moreover, binary trees provide a better space consumption, as only two links are necessary to cover the parent and successor relationship.
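The reverse-bit array embedding described above can be illustrated with a few lines of code. This is a sketch of the index arithmetic only: it does not build or verify the weak-heap ordering, and the key values are taken from the example array merely as placeholders.

```python
# Sketch of the weak-heap array embedding: element i stores a reverse bit r[i];
# its left child sits at index 2*i + r[i] and its right child at 2*i + 1 - r[i],
# so flipping r[i] swaps the roles of the two subtrees in O(1).

class WeakHeapView:
    def __init__(self, keys):
        self.a = list(keys)          # keys in array order
        self.r = [0] * len(keys)     # reverse bits, initially 0

    def left(self, i):
        return 2 * i + self.r[i]

    def right(self, i):
        return 2 * i + 1 - self.r[i]

    def flip(self, i):
        """Exchange which child counts as 'left' and which as 'right'."""
        self.r[i] ^= 1

heap = WeakHeapView([1, 4, 5, 2, 7, 5, 3, 8])
i = 1
print(heap.left(i), heap.right(i))   # 2 3
heap.flip(i)
print(heap.left(i), heap.right(i))   # 3 2
```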
A weak queue storing $n$ elements is a collection of disjoint perfect weak-heaps based on the binary representation of $n = \sum_{i=0}^{\lfloor \log n - 1 \rfloor} b_i 2^i$. In its basic form, a weak queue contains a perfect weak-heap $H_i$ of size $2^i$ if and only if $b_i = 1$. In run-relaxed weak-queues [9], the requirement of having exactly one perfect weak-heap of a given size is relaxed. An additional structure, called the heap store, maintains perfect weak-heaps of same height. At most two heaps per height suffice to efficiently realize injection and ejection of perfect weak-heaps. To meet the worst-case complexity bounds, the join of two perfect weak-heaps of the same height is delayed, while maintaining the following structural property on the sequence of numbers of perfect weak heaps of the same height. The height sequence $(r_0, \ldots, r_k) \in \{0, 1, 2\}^{k+1}$ is regular, if any digit 2 is preceded by a digit 0, possibly having some digits 1 in between. A subsequence of the form $(01^*2)$ is called a block. That is, every digit 2 must be part of a block, but there can be digits, 0s and 1s, that are not part of a block. For example, the height sequence $(1011202012)$ contains three blocks. After the injection of a perfect weak heap, we join the first two of the same size, if there are any. They are found by scanning the height sequence. To grant $O(1)$ access, a stack of pending joins, the join schedule implements the height sequence of pending joins. Then we insert the new weak-heap, while preserving the regularity of the height sequence. For ejection, the smallest weak heap is eliminated from the sequence and, if it forms a pair, the top of the join schedule is also removed. The heap store can be implemented as a singly-linked list where each node, if it is (the first of) a 2, has a jump pointer to the next 2. This implementation is proposed in [1]. Resolving weak-heap order violations is delayed. The primary purpose of a node store is to keep track and reduce the number of potential violation nodes at which the key may be smaller than the key of the (binomial tree) parent. A node that is a potential violation node is said to be marked. A marked node is tough if it is the left child of its parent and also the parent is marked. A chain of consecutive tough nodes followed by a single non-tough marked node is called a run. All tough nodes of a run are called its members; the single non-tough marked node of that run is called its leader. A marked node that is neither a member nor a leader of a run is called a singleton. To summarize, we can divide the set of all nodes into four disjoint type categories: unmarked nodes, run members, run leaders, and singletons. A pair $(\text{type}, \text{height})$ with type being either unmarked, member, leader, or singleton and height being a value in \{0, 1, \ldots, \lfloor \log n \rfloor - 1\} denotes the state of a node, where the height of a node $r$ is the height of the subtree rooted at $r$. Transformations induce a constant number of state transitions. A simple example of such a transformation is a join, where the height of the new root must be increased by one. Other operations (see Fig. 2) are cleaning, parent, sibling and pair transformations. A cleaning transformation rotates a marked left child to a marked right one, provided its neighbor and parent are unmarked. A parent transformation reduces the number of marked nodes or pushes the marking one level up. 
A sibling transformation reduces the markings by eliminating two markings in one level, while generating a new marking one level up. A pair transformation has a similar effect, but also operates on disconnected trees. These four primitive transformations are combined to a singleton or run transformation. We briefly recall the two transformations from [9] as their application is crucial for our approach. In a singleton transformation, we assume that two marked nodes $q$ and $s$ do not have the same parent and that they are of the same height. Furthermore, we assume that $q$ and $s$ are the right children of their respective parents $p$ and $r$, which both are unmarked. Figure 2: Primitives used in a λ-reduction: a) cleaning transformation, b) parent transformation, c) sibling transformation, and d) pair transformation. transformation involves three steps. First, the subheaps rooted at \( p \) and \( r \) are split. Second, the produced subheaps rooted at \( p \) and \( r \) are joined and the resulting subheap is put in the place of the subheap originally rooted at \( p \) or \( r \), depending on which becomes the root of the resulting subheap. Third, the two remaining subheaps rooted at \( q \) and \( s \) are joined and the resulting subheap is put in the place of the subheap originally rooted at \( p \) or \( r \), depending on which is still unoccupied after the second step. If after the third step \( q \) or \( s \) becomes a root, the node is unmarked. By this transformation at least one marked node is eliminated. The purpose of a run transformation is to move the two top-most marked nodes of a run upwards and at the same time remove at least one marking. Assume now that \( q \) is the leader of a run taken from the leader-object list and that \( r \) is the first member of that run. There are two cases depending on the position of \( q \). In Case 1 \( q \) is a right child. Apply the parent transformation to \( q \). If the number of marked nodes decreased, stop. Now the parent of \( r \) is unmarked. If the sibling of \( r \) is marked, apply the sibling transformation to \( r \) and its sibling, and stop. Thereafter, apply the parent transformation once or twice to \( r \) to reduce the number of marked nodes. In Case 2 \( q \) is a left child. If the sibling of \( q \) is marked, apply the sibling transformation to \( q \) and its sibling, and stop. Otherwise, apply the cleaning transformation to \( q \), thereby making it a right child. Now the parent of \( r \) is unmarked. If the sibling of \( r \) is marked, apply the sibling transformation to \( r \) and its sibling, and stop. Otherwise, apply the cleaning transformation followed by the parent transformation to \( r \). Now \( q \) and \( r \) are marked siblings with an unmarked parent; apply the sibling transformation to them to reduce the number of marked nodes. The singleton transformation reduces the number of marking in a given level by 1, not generating a marking in the level above; or by 2, generating a marking in the level above. A similar statement is valid for run transformations, so that for both functions, the number of markings is reduced by at least 1 in constant amount of work. A \( \lambda \)-reduction is invoked once for each decrease-key and twice for each delete and delete-min operation. It calls either a singleton or a run transformation and bounds the number of marked nodes to at most \( \lfloor \log n \rfloor - 1 \). 
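Returning to the heap store introduced earlier in this section, the regularity condition on height sequences is easy to check mechanically. The following sketch reads a block as a subsequence of the form 01*2 and requires every digit 2 to close such a block; it does not model the join schedule itself, and the scanning discipline is our own simplification. The paper's example sequence (1011202012) should be reported as regular with three blocks.

```python
# Sketch of the regularity test for height sequences over {0, 1, 2}:
# every digit 2 must be preceded by a digit 0 with only 1s in between,
# i.e. it must close a block of the form 01*2.

def analyse(seq):
    blocks = 0
    open_block = False            # a 0 has been seen and not yet closed by a 2
    for d in seq:
        if d == 0:
            open_block = True
        elif d == 2:
            if not open_block:
                return False, blocks   # a 2 without an opening 0: irregular
            blocks += 1
            open_block = False
        elif d != 1:
            raise ValueError("digits must be 0, 1 or 2")
    return True, blocks

print(analyse([1, 0, 1, 1, 2, 0, 2, 0, 1, 2]))   # (True, 3)
print(analyse([2, 0, 1]))                        # (False, 0)
```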
In an implementation one would need a list of run leaders, a list of singleton leaders, for each singleton team a list of its members, and an array of pointers to the beginning of each singleton team list. An implementation of run-relaxed weak queues is due to Rasmussen [19]. The code uses primitives of the standard template library STL. In the implementation the node store consists of different list items containing the type of the node marking, which can either be a fellow, a chairman, a leader, or a member of a run, where fellows and chairmen refine the concept of singletons. A fellow is a marked node, with an unmarked parent, if it is a left child. If more than one fellow has a certain height, one of them is elected as a chairman. The list of chairmen is required for a pair transformation. Nodes that are left children of a marked parent are members, while the parent of such runs is entitled the leader. The list of leaders is needed for a run transformation. An implementation of the \( \lambda \)-reduction routine that realizes the above case study with these two lists is shown in Fig. 3. As the pseudo code transparently refers to the transformation routines and not to the actual marking and unmarking procedures underneath (that are called on-the-fly), given the four primitives displaying in Fig.2, the complex case study should be easy to walk through. For additional information on the implementation we kindly refer the reader to the original description in [19]. 3 Rank-Relaxed Weak Queues *Rank-relaxed weak queues* improve the run-relaxed weak queues by *eager* \( \lambda \)-reductions; yielding a more efficient node store. Instead of executing at most one reduction at a time, we eliminate all leaders and chairmen in one operation, thus performing transformations until both lists are empty. In such an iterated reduction, all runs are destroyed and no more than two singletons remain. The modified implementation of procedure is shown in Fig. 4. The changes wrt. the implementation of Rasmussen in Fig. 3 are moderate. The most important change is the embedding of the original \( \lambda \) reduction in an additional loop (\textbf{while} \((\text{leaders} \cup \text{chairmen} \neq \emptyset)\)). Moreover, we have exchanged the order of singleton and run transformations, so that run transformations are preferred. Last, but not least a line that terminates a run transformation in case a singleton one becomes applicable. **Proposition 1** *The loop increases the worst-case time for reduce to \( O(\log n) \).* **Proof.** Eliminating all leaders and all singleton pairs may yield a ripple effect. 
As an example, consider that for each height we have already stored Procedure $\lambda$-Reduce if (leaders $\neq \emptyset$) ;; Leader exists on some level leader $\leftarrow$ leaders.first ; leaderparent $\leftarrow$ parent(leader) ;; Select leader and parent if (leader = leaderparent.right) ;; Leader is right child parenttrans(leaderparent) if (marked(leaderparent) $\land$ marked(leader)) ;; Parent also marked if (marked(leaderparent.left)) siblingtrans(leaderparent); return parenttrans(leaderparent) if (marked(leaderparent.right)) parenttrans(leader) else ;; Leader is left child sibling $\leftarrow$ leaderparent.right ;; Temporary variable if (marked(sibling)) siblingtrans(leaderparent); return cleaningtrans(leaderparent) if (marked(sibling.right)) siblingtrans(sibling); return cleaningtrans(sibling) parenttrans(sibling) if (marked(leaderparent.left)) siblingtrans(leaderparent) else if (chairmen $\neq \emptyset$) ;; Fellow pair on some level first $\leftarrow$ chairmen.first; firstparent $\leftarrow$ parent(first) if (firstparent.left = first and marked(firstparent.right) or ;; 2 children firstparent.left $\neq$ first and marked(firstparent.left)) ;; ... siblingtrans(firstparent); return second $\leftarrow$ chairmen.second; secondparent $\leftarrow$ parent(second) if (secondparent.left = second and marked(secondparent.right) or ;; 2 children secondparent.left $\neq$ second and marked(secondparent.left)) ;; marked siblingtrans(secondparent); return if (firstparent.left = first) cleaningtrans(firstparent) ;; Toggle children marking if (secondparent.left = second) cleaningtrans(secondparent) if (marked(firstparent) or root(firstparent)) ;; Parent also marked parenttrans(firstparent); return if (marked(secondparent) or root(secondparent)) ;; Parent also marked parenttrans(secondparent); return pairtrans(firstparent, secondparent) Figure 3: Reducing number of marked nodes in a run-relaxed weak-queue. 
Procedure Eager $\lambda$-Reduce while (leaders $\cup$ chairmen $\neq \emptyset$) ;; New loop if (chairmen $\neq \emptyset$) ;; New ordering: first singletons, then run members first $\leftarrow$ chairmen.first; firstparent $\leftarrow$ parent(first) if (firstparent.left = first and marked(firstparent.right) or firstparent.left $\neq$ first and marked(firstparent.left)) siblingtrans(firstparent); continue second $\leftarrow$ chairmen.second; secondparent $\leftarrow$ parent(second) if (secondparent.left = second and marked(secondparent.right) or secondparent.left $\neq$ second and marked(secondparent.left)) siblingtrans(secondparent); continue if (firstparent.left = first) cleaningtrans(firstparent) if (secondparent.left = second) cleaningtrans(secondparent) if (marked(firstparent) or root(firstparent)) parenttrans(firstparent); continue if (marked(secondparent) or root(secondparent)) parenttrans(secondparent); continue pairtrans(firstparent, secondparent) else if (leaders $\neq \emptyset$) leader $\leftarrow$ leaders.first; leaderparent $\leftarrow$ parent(leader) if (leader = leaderparent.right) parenttrans(leaderparent) if (marked(leaderparent) $\wedge$ marked(leader)) if (marked(leaderparent.left) siblingtrans(leaderparent); continue parenttrans(leaderparent) if (marked(leaderparent, right)) parenttrans(leader) else sibling $\leftarrow$ leaderparent.right if (marked(sibling)) siblingtrans(leaderparent); continue cleaningtrans(leaderparent) if (chairmen) continue ;; New case if (marked(sibling.right)) siblingtrans(sibling); continue cleaningtrans(sibling) parenttrans(sibling) if (marked(leaderparent.left)) siblingtrans(leaderparent) Figure 4: Reducing number of marked nodes in the rank-relaxed weak-queue. one singleton. Adding another singleton at height 0 we have to perform a transformation, such that its elimination introduces the generation of another one at height 1, and so on, until we reach the root node. As there are at most $O(\log n)$ marked nodes in the store, and each applicable reduction eliminates one marked node, the worst-case of at most $O(\log n)$ steps is immediate. $q.e.d.$ **Proposition 2** The amortized costs for eager $\lambda$-reductions is constant. **Proof.** The critical observation is that with each reduction that generates a new marking at a certain depth, it eliminates more than one with smaller height value. If we assign a account for the constant amount of work needed for applying one reduction with each insertion of an element to the node store, these saved efforts can be exploited to cover the work needed for iterating the $\lambda$-reduction. $q.e.d.$ **Proposition 3** At any given time, there are at most four marked nodes of the same height. **Proof.** By the preference of singleton to run reductions at the time of each run reduction we have at most one marked singleton at each height. The critical case is that a cleaning transformation of the leader at height $h$ to convert it to a left child, will disconnect it from its marked left child and can change it to a singleton, given that the left child of its destination is not marked, so that two singletons could appear in height $h + 1$. With the extra line in the code we participate from the fact that now a singleton transformation applies. As a result, at height $h + 1$ we grant space for a potential second fellow that is needed to finalize the transformation. All other cases ensure that at most one new marking is generated in height $h + 1$, or $h + 2$. 
Continuing with singleton transformations we satisfy the invariant that after executing reduce, we have no run, and at most one singleton for each height. Moreover, in between two such iterated reductions for each height, at most 2 nodes are stored as a singleton. Similarly, at most 2 nodes appear as a member of a run at any given height. $q.e.d.$ The major gain of our approach of eager $\lambda$-reductions is that we can limit the number of markings at a given height. An efficient implementation avoids lists of marked nodes at each height. Instead, we maintain marked nodes in a vector of quadruples; one for each level. The first 2 links are for runs, where a leader can be either of the 2 links. The second 2 links are for singletons. As the leader and singleton lists are doubly-linked, we need 4 additional links per level. At each node we maintain its height and its type. Knowing the type, there are at most 2 positions at which a link to a node can be found, so that marking and unmarking remain in $O(1)$. Maintaining pointers for the leaders and chairmen in doubly-linked list can be avoided by using a bit-vector set implementation. To find any member in the set we compute any (or the most significant) bit that is set to 1. We additionally observe that a refined implementation can save 1 link per node. First of all, the height of a node (already present in the implementation of Rasmussen [19]) can be packed into a single byte. A closer look shows that its representation requires $\log \log n$ bits. This is much less than a link, since with six bits we can cope with heaps of $2^{64} = 1.844 \cdot 10^{19}$ nodes, which is sufficient for all practical purposes. Maintaining the type of a node requires two additional bits. This allows to pack the heights and the types into a single byte. More precisely, using a bit-array implementation (as available in C/C++), both informations still require only one byte per node in addition to successor and parent links. Hence, we save one link per node. Essentially, with our refinement, we require $2n + O(\log n)$ words and $n$ bytes\(^1\). 4 Experiments We conducted experiments on 32-bit and 64-bit Linux PCs. We optimized the GCC binary (with flag -O2). As competitors to rank-relaxed weak queues, we chose Fibonacci heaps, and $k$-ary heaps from the LEDA library [16] (we used the publically available free 32-bit version for this purpose). We also adapted an efficient pairing heap implementation of Irit Katriel (based on work of [20]) that was used in [17]. Our space optimized implementation of rank-relaxed weak queues assumes that pointers to the elements for decreasing a key and deleting an element to modify are known. For a more flexible access, one would need a pointer/iterator to the elements to track their actual moves. \(^1\)As a time-space trade-off, the actual implementation does use left, right and parent pointers yielding a space requirement of $3n + O(\log n)$ words and $n$ bytes. <table> <thead> <tr> <th></th> <th>25,000,000 Integers</th> <th>50,000,000 Integers</th> </tr> </thead> <tbody> <tr> <td></td> <td>Ins</td> <td>DecKey</td> </tr> <tr> <td>Rank-Rel.</td> <td>0.048</td> <td>0.223</td> </tr> <tr> <td>Pairing</td> <td>0.010</td> <td>0.020</td> </tr> <tr> <td>Fibonacci</td> <td>0.062</td> <td>0.116</td> </tr> <tr> <td>k-ary</td> <td>0.136</td> <td>0.091</td> </tr> </tbody> </table> Table 1: Performances per operation for 32-bit priority queues. 
### 4.1 32-Bit CPU

Our first set of experiments was conducted on a 3.2 GHz CPU (AMD Athlon) with 2 GB RAM. As this is a 32-bit machine, one can construct a 64K-sized table with 65,536 entries denoting the most significant bit of all 16-bit numbers.\(^2\)

In Table 1 we measured the time for inserting \(n\) integers, randomly assigned values from \(n\) to \(2n - 1\). Next, we decreased their values by 10 and then deleted all \(n\) minima. CPU user times are provided in microseconds per operation. The bottom entries of the tables refer to results of LEDA; the top ones are alternative implementations. The lack of results in one row is due to the fact that Fibonacci heaps ran out of space.

In Table 2 we measured the time for inserting \(n\) strings, randomly assigned ASCII values from \(100n\) to \(101n - 1\) (which avoids underflows). Next, we decreased the keys by a random value in \([0, n - 1]\) and successively deleted the \(n\) minima. We see that Fibonacci and other heap implementations are inferior, and that pairing heaps are less effective on a larger set of elements.

Table 2: Performance of 32-bit priority queues on strings.

### 4.2 64-Bit CPU

Our second set of experiments was conducted on one core of an Intel i7-920 CPU\(^3\) with 2.66 GHz and 12 GB RAM. We used the same setting as before, but limited our attention to the pairing heap and rank-relaxed weak queue implementations. In Table 3 we scaled the experiment from 25 to 225 million integers, after which RAM became exhausted (for both pairing heaps and rank-relaxed weak queues). As before, pairing heaps are faster in performing insert and decrease-key, but slower on delete-min. As the latter dominates the running time for large numbers of elements, the performance of pairing heaps is inferior.

Table 3: Performance of 64-bit priority queues.

\(^2\)There are some alternative options to quickly compute the most significant bit of an unsigned int \(x\), mostly based on considering \(x \& -x\). Options to identify the position of the bit in the result include converting it to a float, a modulo computation, or a multiplication. We experimented with the latter and got slightly better results than with the 64K table.

\(^3\)As the i7 architecture supports the population count (POPCNT) command in SSE4.2, but not LZCNT, we used an iterative approach to determine the most significant bit in the 64-bit vector, operating in \(\log 64 = 6\) steps.
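The two most-significant-bit routines referred to in footnotes 2 and 3 can be sketched as follows. This is our own illustrative C++, not the code used in the experiments; both functions assume their argument is non-zero.

```
#include <cstdint>

// 64K lookup table: msb16[x] = position of the most significant set bit of x (x > 0).
static std::uint8_t msb16[1u << 16];

void init_msb16() {
    msb16[0] = 0;   // unused; callers must pass x > 0
    msb16[1] = 0;
    for (unsigned i = 2; i < (1u << 16); ++i)
        msb16[i] = static_cast<std::uint8_t>(msb16[i >> 1] + 1);
}

// Most significant set bit of a 32-bit word via two table lookups.
unsigned msb32(std::uint32_t x) {
    return (x >> 16) ? 16u + msb16[x >> 16] : msb16[x & 0xFFFFu];
}

// Most significant set bit of a 64-bit word by iterative halving,
// using log 64 = 6 comparison steps, as described in footnote 3.
unsigned msb64(std::uint64_t x) {
    unsigned pos = 0;
    for (unsigned shift = 32; shift > 0; shift >>= 1)
        if (x >> shift) { x >>= shift; pos += shift; }
    return pos;
}
```

Which variant is faster depends on cache behaviour, which is consistent with the authors' observation that a multiplication-based alternative slightly outperformed the 64K table.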
Table 4 displays the total number of element comparisons for the experiment (including \(n\) inserts, \(n\) decrease-keys and \(n\) delete-mins). As expected, we see that rank-relaxed weak queues are clearly superior to pairing heaps.

<table>
<thead>
<tr> <th>Elements</th> <th>Pairing</th> <th>Rank-Rel.</th> </tr>
</thead>
<tbody>
<tr> <td>25,000,000</td> <td>1,117,868,044</td> <td>969,285,934</td> </tr>
<tr> <td>50,000,000</td> <td>2,341,540,962</td> <td>2,014,524,909</td> </tr>
<tr> <td>75,000,000</td> <td>3,604,956,553</td> <td>3,091,500,382</td> </tr>
<tr> <td>100,000,000</td> <td>4,894,251,738</td> <td>4,178,886,163</td> </tr>
<tr> <td>125,000,000</td> <td>6,202,768,881</td> <td>5,279,851,817</td> </tr>
<tr> <td>150,000,000</td> <td>7,526,500,750</td> <td>6,408,502,237</td> </tr>
<tr> <td>175,000,000</td> <td>8,863,051,572</td> <td>7,524,243,367</td> </tr>
<tr> <td>200,000,000</td> <td>10,210,578,621</td> <td>8,656,277,841</td> </tr>
<tr> <td>225,000,000</td> <td>11,567,978,225</td> <td>9,796,509,293</td> </tr>
</tbody>
</table>

Table 4: Number of comparisons for priority queues.

## 5 Conclusion, Discussion and Future Work

To push the practical effectiveness of priority queues we have improved run-relaxed weak queues to rank-relaxed weak queues. They outperform Fibonacci heaps on moderate element counts, and pairing heaps on larger sets of elements or with complex comparisons. The refinement we suggest relies on the property of constantly bounded buckets at each height level. Our vision is a conceptually simple structure with good theoretical and practical performance for substituting Fibonacci and pairing heaps in textbooks and libraries. At this point we emphasize that, although such a comparison reflects the quality of the programmers' implementations and not only data structure performance, the empirical comparison among these structures is rather fair, as all three implementations maintain memory for node allocation on their own. On the other hand, by using (resizable) arrays for this purpose, the implementations do affect their theoretical worst-case performance guarantees.

Despite the good practical performance, rank-relaxed weak queues are not a clear-cut winner compared to, e.g., pairing heaps. Consider a graph application where the priority queues are used. The running time of the resulting program is proportional to $m + n \log n$, where $m$ is the number of edges and $n$ is the number of nodes. When $m$ is large, the first term dominates the overall cost, and the constant factor for this term is determined by decrease-key. The decrease-key operation is simply too slow for weak queues and their relatives to beat pairing heaps in this setting. The price we pay, similar to rank-relaxed heaps [4] and in contrast to run-relaxed queues, is that decrease-key now operates in amortized (instead of worst-case) constant time. The apparent question is whether we can get back to worst-case constant time while preserving the effectiveness of constantly bounded lists. Moreover, applying λ-reduction eagerly may result in restructuring transformations that would not be necessary if delayed reductions were applied (e.g., singletons might be eliminated due to an unmarking before the corresponding singleton transformation applies). The increased speed, however, indicates that accelerated restructuring is more important than the savings obtained by maintaining a slightly larger node store.

Due to the less complex structure, extensions to two-tier [11] (resp. multipartite [10]) priority queues with $\log n + O(\log \log n)$ (resp. $\log n + O(1)$) element comparisons for a delete might be easier to realize. However, we expect practical impact only for very complex keys, given that only $\log n$ element comparisons are currently needed to retrieve the minimum element.
Other interesting structures to compare with in the future are quickheaps [18] and violation heaps [8]. Moreover, a new variant of pairing heaps, assumed to be simpler, also builds on collections of binary trees [13].

Relaxed heap structures have been shown to be efficient in the EREW PRAM model for shortest path, minimum spanning tree, minimum cost flow and other graph-related algorithms [4]. This suggests studying whether they can operate effectively on graphics processing units in general-purpose programming language environments like NVIDIA's CUDA.

**Acknowledgement** Thanks to Peter Sanders for insightful comments on advanced bit hacks and new trends in processor architectures, and, together with his Ph.D. students Ospinov and Singler, for access to the advanced pairing heap implementation of Irit Katriel that has been used in [17]; to Jyrki Katajainen for naming the data structure *rank-relaxed weak queues* and for initiating a continuation of this research; to Jens Rasmussen for providing access to the code; and to Martin Dietzfelbinger for proofreading. Last but not least, the author wants to thank Jan Vahrenhold, Susanne Albers and Petra Mutzel for their support in continuing this research.

References
View-Driven Optimization of Database-Backed Web Applications

Cong Yan, University of Washington, congyan@cs.washington.edu
Alvin Cheung, UC Berkeley, akcheung@cs.berkeley.edu
Junwen Yang, University of Chicago, {junwen, shanlu}@uchicago.edu

ABSTRACT

This paper describes HYPERLOOP, a system for optimizing database-backed web applications (DBWAs). Current approaches to optimizing DBWAs focus on partitioning the application among the browser, application server, and the database, and rely on each component to optimize its portion individually without developer intervention. We argue that this approach misses the goal of DBWAs in optimizing for end-user experience, and fails to leverage the domain-specific knowledge that DBWA developers have. For instance, a news website might prioritize loading of the news headlines, even at the expense of slowing down loading of other visual elements on the page. HYPERLOOP illustrates the idea of view-driven optimization by allowing developers to specify priorities for each of the elements on the webpage, and uses such information to drive optimization of the entire webpage. HYPERLOOP currently focuses on optimizing the render time of webpage components, and our preliminary results show that this view-driven approach can substantially improve DBWA performance by leveraging developer-provided application knowledge.

1. INTRODUCTION

From banking to social networking, we interact with database-backed web applications (DBWAs) on a daily basis. Unlike transactional or analytical applications, DBWAs are structured in a three-tier manner: a presentation tier executed by the web browser called the view, an application tier that resides on the application server, along with a storage tier consisting of queries and persistent data managed by the database. Such an application executes when an end user visits a website. The web server, upon receiving the request, runs the corresponding hosted application that interacts with the storage tier to manipulate persistent data. The query results are returned to the hosted application on the web server. The view tier then assembles the results and renders them into a webpage to be displayed on the browser.

In principle, this three-tier architecture eases web application development: the DBWA components are partitioned and can be optimized by their respective hosts (i.e., the browser, web server, and the database). In practice, however, optimizing such applications is extremely difficult. Unlike transactional applications that focus on optimizing for throughput, DBWAs instead focus on end-user experience (e.g., interactive websites), which often translates to the time taken to render the resulting webpage. Recent studies have shown that every 0.5s of latency in website rendering reduces website traffic by 20% [18], and that users will abandon a site if it takes longer than 3s to load [9]. In such web applications, load time is not only caused by the view tier, but also depends on the amount of time taken for the application and storage tiers to execute application logic and queries.

The three-tier architecture makes it difficult for developers to optimize their DBWAs: while end users only interact with the view tier, developers need to reason about how the page is generated through a complex myriad of code that spans the software stack. To make things worse, webpages typically consist of multiple view components (e.g., tables, buttons, text blocks, etc.), with each component rendered using different code paths.
In view design literature [5, 1], it is well known that not all view components are created equal: a user might perceive a news website to have loaded already once the news headlines have appeared on the screen, even though the rest of the page has not been fully loaded yet. This has led to the development of asynchronous loading libraries that allow developers to modify their DBWA code to load page elements at different times, even at the expense of slowing down the rest of the page. Such tradeoffs are prevalent in DBWA design: dividing a long list of items into multiple, shorter lists and rendering each of them across multiple pages (pagination), pre-loading data that is likely to be used in subsequent pages that the user will visit (caching), etc. Today developers make such tradeoffs manually by changing the code across layers, observing how that impacts each page element, and repeating the process until the best design is reached. However, we are unaware of any system that would systematically capture such "view-specific" knowledge from developers and exploit it for optimization of DBWAs.

We argue that this page element-wise "trial-and-error" optimization of DBWAs is wrong. Instead, we believe the optimization of DBWAs should be view-driven by the domain-specific knowledge that developers possess. In this paper, we describe HYPERLOOP, a new system we are designing with that purpose. HYPERLOOP concretizes view-driven optimization by allowing DBWA developers to provide domain-specific knowledge as priority labels for each webpage element to indicate those that should be rendered first.¹ Given priorities and a resource budget (HYPERLOOP currently supports specifying the total memory available to store data in memory), we envision HYPERLOOP automatically analyzing the DBWA code to devise a plan to render each of the pages in the application, with the goal of reducing the render time of the high-priority elements as much as possible. HYPERLOOP achieves this by applying different optimizations across all three tiers, from changing the layout of each page to customizing the data structures that store persistent data in memory. To help developers assign priorities, HYPERLOOP comes with a static analyzer that estimates render times and presents the results via HYPERLOOP's user interface.

While we are still in the early implementation phase of HYPERLOOP, our initial experiments have shown promising results: we can improve the start render time (i.e., the time elapsed until an element first appears) of high-priority webpage elements in real-world DBWAs by 27×. We believe this illustrates the potential of view-driven optimization of DBWAs, with HYPERLOOP presenting an initial prototype that implements this concept.

¹Other notions of domain-specific knowledge are certainly possible, e.g., impact on user experience, interactivity, etc. We currently use the time to render as it is an easily quantifiable measure.

2. HYPERLOOP OVERVIEW

We now discuss how DBWA developers can use HYPERLOOP to improve their applications. Figure 2 shows a code fragment from Tracks [3], a popular Ruby on Rails DBWA for task management. Figure 1 shows a page from Tracks listing the projects created by a user, where each project contains a list of todo actions. This page has three panels. The left panel shows a list of undone projects (we call a project "undone" if it contains undone todos), with the detail of each todo shown when clicked.
The upper right panel shows a form where a user can add a new todo, and the bottom right panel shows a list of active projects and their todo counts. Figure 2 shows the abridged DBWA code used to render this page. Lines 1-6 show how persistent data is organized into the User, Project and Todo classes. They also specify the relationships between the classes, for instance, a project has many todos, implemented as a foreign key constraint in the database. Line 8 retrieves the list of projects that belong to the current user from the database into the Ruby array variable @projects. Line 9 then filters @projects to return those to be rendered on the left panel based on the number of undone todos. The filter for the right panel selects the active projects in Line 10. The code uses the where API provided by the Rails library, which translates the object query into the SQL queries shown at the bottom of Figure 2. Lines 11-18 show the view file written in HTML with embedded Ruby code.

The bottom of Figure 2 shows the SQL queries translated by the Rails library to generate this page. Q1 retrieves the current user, followed by Q2 to retrieve her projects. A number of queries (e.g., Q3) are issued to get the count of undone todos for each project. Similarly, some queries are issued to get the todos for each project (Q4) and the note for each todo (Q5) shown on the left panel, as well as the todo count for each project (Q6) on the right.

HYPERLOOP allows developers to improve performance via its view-centric interface. Figure 3 shows the HYPERLOOP workflow. To use HYPERLOOP, the developer only needs to label the high-priority elements on the webpage, and HYPERLOOP will automatically analyze the application code to suggest different ways to render the page by reducing the render time of high-priority elements, while possibly increasing the render time of the low-priority ones. We envision that HYPERLOOP will make different tradeoffs based on how the elements are labeled, and propose different render plans for the developer to further refine.

To help developers assign priorities, HYPERLOOP comes with an analyzer that statically estimates the load time of each page element, given the amount of data currently stored in the database. The estimates are presented to the developer as a heatmap, as shown in Figure 4. We envision other analyses will also be useful in aiding the developer to assign priorities, for instance the amount of memory used, the query plans used to retrieve rendered data, etc.

For the example shown in Figure 1, suppose the developer decides to label the list of undone projects as high priority, based on the current load time estimate. Given this information, HYPERLOOP will suggest different ways to render the page and optimize the data processing leveraging the priorities. For instance, loading the undone projects panel asynchronously (to be discussed in Section 6), and furthermore storing them in a dedicated list in memory for fast retrieval (to be discussed in Section 7). If the developer instead labels the active projects as high priority, and the undone projects as low priority, then HYPERLOOP will generate a different set of suggestions accordingly.

3. PRIORITY-DRIVEN OPTIMIZATION

HYPERLOOP applies various optimizations to different webpage elements depending on the priorities provided by the developer.
These optimizations often provide a speedup for certain webpage elements (i.e., high-priority ones) at the cost of the loading time or the rendering quality of other elements (i.e., low-priority ones), and hence are not explored by traditional optimization techniques. We present a few optimizations of this type below. We then discuss how we implement these optimizations in the next few sections.

**Asynchronous loading.** Asynchronously loading a view element \( e \) allows web users to see \( e \) before other potentially slow elements get loaded. The downside is that the total amount of computation or the total number of queries issued to the database may increase, because previously shared computation across asynchronously loaded components can no longer be shared. This optimization can be applied to high-priority elements, and will require view changes (Section 5) and application-tier changes (Section 6).

**Pre-computing.** While generating one webpage \( p_t \), one can pre-compute the contents needed to generate the next page \( p_{t+1} \), which the web user is likely to visit next through a link on \( p_t \). This will speed up the loading of \( p_{t+1} \) at the cost of the loading time of \( p_t \). HYPERLOOP supports this optimization only when the developer assigns high priority to the link on \( p_t \) that points to \( p_{t+1} \). It is implemented through our app-tier optimizer (Section 6).

**Optimizing for heavy reads or writes.** There are often both read and write accesses to the same database table. Our database layout generator (Section 7) can optimize either for a write-heavy workload, at the cost of read performance, or for a read-heavy workload, at the cost of write performance, based on the priority information provided by the developer.

**Pagination.** It often takes a long time to retrieve and display a long list of items. One way to improve performance is to show only the first \( K \) items in the list and allow users to navigate to subsequent pages to view the remaining items. This change can greatly improve the loading time of the list, but at the cost of users taking longer to see later parts of the list. It can be applied to a list that contains both high- and low-priority items, or to an overall low-priority list whose content-viewing experience is less important than its loading speed. This is implemented in HYPERLOOP's view designer (Section 5) and app-tier optimizer (Section 6).

**Approximation.** Approximation can be applied to many aggregation queries, such as showing "you have more than 100 TODOs" instead of "you have 321 TODOs." Like pagination, approximation presents a tradeoff between loading speed and the quality (accuracy) of the content, and is suitable for low-priority elements. This is implemented in HYPERLOOP's view designer (Section 5) and the app-tier optimizer (Section 6).

**Using stale data.** Caching data in memory and updating it only periodically can improve performance at the cost of data quality and freshness. Priorities provided by developers can help HYPERLOOP determine which page elements to cache. This is implemented in HYPERLOOP's app-tier optimizer (Section 6) and layout generator (Section 7).

4. HYPERLOOP'S USER INTERFACE

Figure 4: Heatmap showing the estimated loading cost of each webpage element, along with priority assignment and rendering recommendations generated by HYPERLOOP.

HYPERLOOP provides a unique interface for developers to understand the performance of their application and provide priority information.
First, it presents the statically estimated cost (to be discussed in Section 6) to render each HTML element as a heat map in the browser. This cost includes the time to retrieve the data from the database and process it in the application server. Figure 4(a) shows an example heat map of the webpage shown in Figure 1, where a darker color means a higher cost. For example, the left and bottom right panels have a high cost because of the large number of projects stored in the database, making the queries that involve them (e.g., Q1 in Figure 2) slow.

As discussed in Section 2, after examining the estimates, the developer can click on an HTML element on the page to indicate its priority. We intend the interface to support applying the same priority to a group of elements after highlighting them. HYPERLOOP supports different priority levels as shown in Figure 4. After assigning priorities, the developer clicks on the "analyze" button on the right. HYPERLOOP then analyzes each element together with its priority, and provides a list of suggestions as shown in Figure 4. Some of these suggestions require further user input, for instance, how many objects to show on each paginated page. All of the suggestions are related only to the webpage's look and functionality, and the developer needs no database knowledge to choose a suggestion. We describe the list of suggested changes in Section 5.

The developer can right-click each element to view the rendering plan generated by HYPERLOOP. After choosing one of the plans, HYPERLOOP will change the application, re-estimate the cost, and render the new webpage with a new heatmap, like the one shown in Figure 5, where the left panel is now loaded first.

HYPERLOOP not only suggests the rendering plan but also optimizes query processing based on the priority assignment, as described in Section 6.2 and Section 7.2. These optimization strategies often involve tradeoffs, for instance, accelerating a query that retrieves data for high-priority HTML tags by slightly slowing down queries for low-priority tags. HYPERLOOP renders a list of such optimizations in the IDE and lets developers enable and disable them individually (by default HYPERLOOP applies all optimizations), as shown in Figure 6. The developer can then ask HYPERLOOP to regenerate the heatmap to see the effect of certain optimization(s). Doing so allows the developer to do A/B testing and understand how these optimizations interact with each other. Furthermore, HYPERLOOP can show the refactored code (after choosing rendering recommendations and a set of optimizations) if the developer wants to know the change in more detail, as shown in Figure 6.

HYPERLOOP supports assigning priorities not only to HTML tags but also to forms and to hyperlinks to other webpages. If a form is assigned high priority, HYPERLOOP will attempt to reduce the time taken to process the form by changing the in-memory data layout of persistent data (to be discussed in Section 7). If a hyperlink is assigned high priority, HYPERLOOP will optimize the render time of the linked page, possibly by increasing the time taken to load the current page. As mentioned in Section 2, the developer can visualize such tradeoffs and reassign priorities using the HYPERLOOP interface as needed.

We next discuss the design of HYPERLOOP as shown in Figure 3 and how different tradeoffs are made given priority information.

5. VIEW DESIGNER

The View designer analyzes and transforms view files that define webpages' look and functionality.
Its purpose is to identify which application-tier object is rendered by which HTML element, and to pass this information to the Application-tier optimizer. It also carries out priority-driven optimizations as described below.

**Asynchronous loading.** To asynchronously load an HTML element $e$, the View designer splits the original HTML file into two files, one rendering $e$ and the other rendering the rest of the page. To do so, the View designer first creates a new view file $v$ to render $e$, then creates new code to reside on the application server to compute the contents needed by $e$ to render the view file $v$, and finally replaces $e$ in the original view file with an AJAX request.

**Pagination.** The View designer detects pagination opportunities by checking whether an HTML element renders a list of Ruby objects in a loop. After the developer decides to paginate an element, the View designer rewrites the view file to render a constant number of elements first and adds a page navigation bar, as described in prior work [26]. It passes the design decision to the Application-tier optimizer, which will change the query to return limited results (by adding `LIMIT` and `OFFSET`). Pagination itself can greatly accelerate the start render time. For example, paginating the left panel of Figure 1 to show 20 projects per page (out of 2K projects altogether) accelerates the panel rendering by 27×.

**Approximation.** The View designer detects approximation opportunities by checking whether an HTML element displays a value returned by an aggregation query. Once the developer accepts an approximation opportunity, the View designer changes the view file to add "at least" or "at most" before the aggregation value and passes it to the Application-tier optimizer, which changes the query to count only N values by adding a `LIMIT` clause.

6. APPLICATION-TIER OPTIMIZER

We now describe the static analysis framework in HYPERLOOP's Application-tier optimizer that enables a wide variety of optimizations, including basic optimizations that can be applied without priority information. We then give examples of how it supports priority-driven optimizations.

6.1 Analysis framework

The Application-tier optimizer statically analyzes the application code to understand 1) how the application computes and generates the data that is to be rendered at each view component, and 2) the flow of actions across consecutive pages.

Figure 7: An Action Flow Graph (AFG) example.

To enable such analysis, the Application-tier optimizer constructs an Action Flow Graph (AFG). An example is shown in Figure 7. An AFG is a flow graph consisting of a set of hypernodes and next-action edges. Each hypernode represents a controller action, i.e., the complete code path used to generate a webpage. A next-action edge links a pair of actions \((a_1, a_2)\) if \(a_2\) can be invoked from \(a_1\) as a result of a user interaction, for instance, by clicking on the webpage. To identify such interactions, the Application-tier optimizer identifies the HTML elements that contain URLs or forms and determines the subsequent actions that may be triggered as a result of a user interaction, for instance clicking on a URL or submitting a form.

Inside each hypernode, the Application-tier optimizer builds an action dependency graph (ADG). Every node \(n\) in the ADG represents a statement in the corresponding action. Every edge \(e\) represents either a control dependency or a data dependency.
Nodes in the ADG are tagged as query nodes if they issue queries, with their data dependency edges labeled with database table and column names. Using the ADG, the Application-tier optimizer can trace back from nodes that render data (e.g., N7 in Figure 7) to all the queries on which the data-rendering node has a control or data dependence (e.g., N1 and N2). These queries are considered contributing queries. This analysis enables the Application-tier optimizer to perform many types of optimizations, as introduced in earlier work [23]. Some optimizations always improve performance, e.g., adding projection to load only the fields being used. Others, however, require making tradeoffs, which we discuss next.

6.2 Priority-driven optimization

We now discuss a few examples of how the Application-tier optimizer supports priority-driven optimizations.

**Example 1: Splitting queries.** Very often a DBWA issues a query to retrieve one set of data that is then filtered/processed in multiple ways to render multiple view components, as doing so can reduce duplicate work in rendering related view components. For example, a webpage may show both a list of projects and a total count of these projects. The application can issue a single query to retrieve all projects while counting them in memory. Another example is the query to retrieve all projects (Q1 in Figure 2) into a Ruby array @projects that is filtered separately in memory to obtain @left_projects and @right_projects. Although the shared query helps to reduce the total number of queries issued and the overall computation required to render the page, it can be sub-optimal if the view components it supports have different priorities.

Specifically, to carry out the asynchronous loading optimization discussed in Section 3, the Application-tier optimizer splits a shared query if its result is used in asynchronously loaded elements. For example, if the left panel has high priority and is to be loaded asynchronously from the other parts, the optimizer splits the shared query into Ql and Qr as shown below:

**Listing 1: Example application code illustrating query splitting**

```
Ql: @left_projects = user.projects.where(undone_count>0).include(todos, note)
Qr: @right_projects = user.projects.where(status='active').include(todos.count)
```

After the split, each query retrieves the data shown on the corresponding panel. Doing so causes the projects shown in the two panels to be retrieved by separate queries, but allows separate optimization of Ql, such as eager-loading of todos and notes for the left panel query (where and include are query functions that filter data using a predicate and eager-load the associated objects, respectively). In addition, the Application-tier optimizer passes the design decision to the Layout generator, and the splitting allows the generator to customize a layout for Ql (described in Section 7). As an illustration, with 4K total projects and 50% of them shown on the left panel, splitting the query and optimizing Ql as mentioned above reduces the query time of Ql from 5.1s to 0.5s, and the overall start render time from 13.7s to 6.1s.

**Example 2: Pre-loading data.** By default, each page is computed from scratch upon receiving an HTTP request. However, developers might know the next page(s) the user will likely visit and wish to pre-load data to accelerate loading of the next page, even at the expense of increasing the load time of the current page slightly.
An example is when a user visits the home page of a forum and then visits different posts by clicking on their hyperlinks. As the home page shows only the title of each post, generating it once and caching it on the client side would be optimal, but the pages of individual posts might be impractical to cache as each may contain large images and contents. Yet, the developer might want the posts to load fast and be willing to trade off the performance of the home page. In that case, she can indicate priorities on the current page and HYPERLOOP will pre-load data accordingly. We use the Sugar forum application [2] as an illustration, where the database query retrieving the posts on its homepage selects not only the title but also the contents of each post. With a forum of 500 posts on the home page, doing so shortens each post page's rendering time by 82% while increasing the home page load time by 12%.

**Example 3: Caching common subexpressions.** Common subexpressions are often shared among queries across consecutive pages [23]. Subsequent pages can reuse the results of these common subexpressions, with a slight overhead for the current page due to caching. The Application-tier optimizer applies such caching if the developer chooses to pre-load data for subsequent pages after labeling them with high priority. For example, for a page that shows the first 40 recent posts, the developer can assign the subsequent pages high priority, as users will likely explore beyond the most recent 40 posts. The queries for the first and second pages are shown in Listing 2. They share the same subexpression that sorts the posts. In this case, the Application-tier optimizer will rewrite the queries to sort the posts, store the sorted results in a list, and cache them such that the queries for all subsequent pages can simply return from this sorted list, as shown in Listing 3. With 10K posts, doing so slightly sacrifices the render time of the first page (an increase of 5%) but speeds up the other pages by 2.3×.

**Listing 2: Two queries sharing a common sub-expression**

```
P1: @posts = post.order(:created).limit(40).offset(0)
P2: @posts = post.order(:created).limit(40).offset(40)
```

**Listing 3: Common sub-expression result is cached and reused**

```
P1: @posts_all = post.order(:created)
    @posts = @posts_all.limit(40).offset(0)
P2: @posts = @posts_all.limit(40).offset(40)
```

**Example 4: Using stale data.** It may be worthwhile to show stale data in a low-priority HTML element for better performance. The Application-tier optimizer implements this by changing the application code to cache the data rendered in labeled HTML elements, and to reuse it when the same element is rendered subsequently. For example, a developer may think the bottom right panel in Figure 1 occupies only a small and unimportant part of the webpage and thus label it as low priority. The Application-tier optimizer will then suggest caching the list of active projects, the total count, and the count of todos for each project. Doing so eliminates most of the queries to the database when rendering the page. To evaluate this, we use 2K active projects shown on the right panel of Figure 1, and the total rendering time is reduced by 65% after the data for the right panel is cached.

7. LAYOUT GENERATOR

In this section we first introduce the basic optimizations that the Layout generator can perform without user interaction. We then give examples of how it performs priority-driven optimizations.
7.1 Basic optimizations

HYPERLOOP's Layout generator generates a customized data layout for an application. It takes all the queries that can potentially be issued by the application and finds the best in-memory data layout to store the application data in order to improve overall query performance. It also generates query plans and estimates the cost of each query.

The Layout generator's data layout and query plan search space are specifically designed for object-oriented database-backed applications like DBWAs. The layout design space is inspired by the object query interface. For instance, because queries often return objects and nested objects, it is expensive to frequently join multiple tables and then convert the tabular join results into nested objects. So the layout space incorporates not only the traditional tabular layout and indexes, but also deeply nested layouts.

The Layout generator first enumerates the possible data layouts that store the data for each individual query, as well as query plans that use particular layouts. Then it finds the optimal layout for the entire workload by formulating an integer linear programming (ILP) problem. In this formulation, each data structure in every possible layout is assigned a binary variable to indicate whether it is included in the final layout; similarly for each query plan. It also estimates the memory cost of each data structure and the time of each query plan. For write queries, it generates one update plan per data structure. The optimization constraints state that the overall cost of all included data structures is within the memory bound provided by the user, while the optimization goal is to minimize the overall runtime of all queries. It uses state-of-the-art solvers to solve the ILP problem and constructs the final layout accordingly. Finally, it generates an implementation of both the data layouts and the query plans.

7.2 Priority-driven optimization

The Layout generator supports priority-driven optimizations. All examples below reduce the load time of the elements labeled with high priority at the expense of possibly slowing down other elements on the same page.

**Example 1: Changing read-query weight.** The Layout generator can design a data layout that better optimizes queries with high priority. Since it formulates the search for a data layout as an optimization problem whose goal is the sum of the runtime costs of all queries, it can simply assign higher weights to those queries whose results are needed to render higher-priority HTML elements, as identified by HYPERLOOP's Application-tier optimizer.

For example, consider Ql and Qr shown in Listing 1. Before assigning priorities, the Layout generator may produce a layout as shown in Figure 8(a). If the developer assigns higher priority to the left panel, Ql will receive a higher weight, which results in the layout shown in Figure 8(b). In this layout, the projects belonging to a user are stored as a nested list within the user; the todos are stored as nested objects in each project; and similarly for the notes. This layout is highly optimized for Ql: retrieving @left_projects does not need to perform a join on project, todo, and note, compared to using layout (a). This layout also avoids the expensive deserialization from a denormalized table to nested objects. With this layout, the query time for the left panel is further reduced to 0.5s (compared to 5.1s using the tabular layout).
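To make the layout search and the priority weights concrete, the ILP described in Sections 7.1 and 7.2 can be sketched roughly as follows; the notation is ours, not the paper's. Let $x_d \in \{0,1\}$ indicate whether data structure $d$ is included in the layout, $y_{q,p} \in \{0,1\}$ whether plan $p$ is chosen for query $q$, $t_{q,p}$ the estimated runtime of plan $p$, $m_d$ the memory cost of $d$, $M$ the developer-provided memory budget, and $w_q$ the priority-derived weight of query $q$. The linking constraint between plans and the structures they use is our own reading of the description:

$$
\begin{aligned}
\text{minimize }\; & \sum_{q} w_q \sum_{p \in P(q)} t_{q,p}\, y_{q,p} \\
\text{subject to }\; & \sum_{d} m_d\, x_d \le M, \\
& \sum_{p \in P(q)} y_{q,p} = 1 \quad \text{for every query } q, \\
& y_{q,p} \le x_d \quad \text{whenever plan } p \text{ uses data structure } d, \\
& x_d,\, y_{q,p} \in \{0,1\}.
\end{aligned}
$$

Raising $w_q$ for queries that feed high-priority elements (or for the write queries in Example 2 below) biases the solver toward layouts that favor those queries, which is exactly the tradeoff HYPERLOOP exposes to the developer.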
**Example 2: Changing write-query weight.** An HTML form might be assigned high priority if it is frequently used, such as the form shown in the upper right of Figure 1 if new todos are frequently added. The queries used for new-todo submission are shown below:

```
@todo = Todos.new(name=param['name'], ...)
@todo.note = Notes.new(content=param['note_content'], ...)
@todo.save
```

In this case, the Layout generator adds a larger weight to the write query creating a new todo with embedded notes. It would then generate the data layout in Figure 8(c). Compared to (b), where inserting a new todo needs an extra query to locate the project that this todo belongs to, (c) is better optimized for adding new todos because the todos are stored as a top-level array and an insertion only appends to the array without the extra read query.

**Example 3: Leveraging infrequent updates (stale data).** If an HTML element is assigned low priority, the query that retrieves data for this element can potentially read from stale data. The Layout generator can generate a more efficient data layout for this type of read query. For example, if the count of all active projects (as shown in the bottom right panel of Figure 1) is assigned the lowest priority, then the generator will assign a very low weight to any plan that updates the data structure used only to compute this count. The generated data layout will pre-compute the count and store it in memory, which reduces the end-to-end webpage rendering time when the count is rendered. For instance, using stale data for the active project and context count in the bottom right panel accelerates rendering the panel by 26% (after the list is paginated). Without knowing the priority, the count is unlikely to be pre-computed, because any delete query would trigger a re-computation of this count, which greatly increases the total query cost.

8. RELATED WORK

**Priority in Software Development.** The concept of "priority" has been widely used in software engineering. For example, in agile programming, one common practice is to list user stories (i.e., user-facing software features) and give them priorities in the development cycle [4, 6]. We use the same concept in making view-performance tradeoffs. However, rather than using priorities to decide which software feature to implement first, HYPERLOOP leverages priorities to improve the page-viewing experience of end users, with the assumption that high priorities are assigned to webpage elements that are intended to catch the viewer's attention.

**Database Optimizations.** The database community has proposed query optimizations similar to those described in this paper: for example, identifying and caching shared subexpressions in the context of multi-query optimization [16, 17], leveraging stale data by using lower consistency levels in the context of transaction processing [21], and automatic design of materialized views, where different query weights lead to different views, in the context of physical design [7, 15]. Although many of these optimizations are not new, deciding when to apply them and how to make the tradeoffs in DBWAs requires the developer's knowledge and preferences. We propose the design of an easy-to-use interface that captures the developer's preferences via priorities and automates the implementation of the optimizations.

**Optimizing Database-backed Applications.**
Much work has been done on discovering performance issues in database-backed web applications, such as retrieving unneeded data [11], issuing long query chains that are difficult to optimize [10], and other API misuses [25]. Prior approaches also include solving these issues [13, 12, 14, 8, 19, 22, 24], but they focus on performing semantics-preserving, low-level code changes on the application automatically, similar to an optimizing compiler, and they all assume that the goal is to reduce the latency of loading the entire page. HYPERLOOP is not designed to be an optimizing compiler; instead, it focuses on aiding the developer in prioritizing page elements to optimize directly from the view, and it suggests various kinds of code and view changes by leveraging the priorities provided by the developer.

9. CONCLUSION

We presented HYPERLOOP, a new system that helps developers optimize DBWAs. Unlike prior approaches, HYPERLOOP recognizes that developers often make tradeoffs when designing DBWAs, and leverages developers' knowledge to optimize DBWAs in a view-driven manner. Given priority information provided by the developer, HYPERLOOP automatically analyzes the application and suggests various design and code changes to improve the render time of different elements on the page. While still under implementation, preliminary results have shown that our view-driven approach is effective in improving the end-user experience of real-world DBWAs.

10. ACKNOWLEDGEMENT

This work is supported in part by the National Science Foundation through grants IIS-1546083, IIS-1546543, IIS-1651489; DARPA award FA8750-16-2-0032; DOE award DE-SC0016260; the Intel-NSF CAPA center; and gifts from Adobe and Google.

11. REFERENCES
# TABLE OF CONTENTS

Chapter 1. CUDA Toolkit Major Components
Chapter 2. New Features
  2.1. General CUDA
  2.2. CUDA Tools
    2.2.1. CUDA Compilers
    2.2.2. CUDA Profiler
    2.2.3. CUDA Profiling Tools Interface (CUPTI)
    2.2.4. CUDA-GDB
    2.2.5. CUDA-MEMCHECK
  2.3. CUDA Libraries
    2.3.1. cuBLAS Library
    2.3.2. NVIDIA Performance Primitives (NPP)
    2.3.3. cuFFT Library
    2.3.4. cuSOLVER Library
    2.3.5. cuSPARSE Library
    2.3.6. Thrust Library
Chapter 3. Deprecated Features
Chapter 4. Resolved Issues
  4.1. General CUDA
  4.2. CUDA Tools
    4.2.1. CUDA Compilers
    4.2.2. CUDA Profiler
    4.2.3. CUDA Profiling Tools Interface (CUPTI)
    4.2.4. CUDA-GDB
    4.2.5. CUDA-MEMCHECK
  4.3. CUDA Libraries
    4.3.1. cuBLAS Library
    4.3.2. NVIDIA Performance Primitives (NPP)
Chapter 5. Known Issues
  5.1. General CUDA
  5.2. CUDA Tools
    5.2.1. CUDA Compiler
    5.2.2. CUDA Profiler
CUDA-MEMCHECK......................................................................... 14 5.3. CUDA Libraries........................................................................................... 14 5.3.1. cuBLAS Library................................................................................. 14 5.4. CUDA Samples............................................................................................ 15 Chapter 6. CUDA Tegra Release Notes................................................................. 16 6.1. New Features............................................................................................ 16 6.2. Known Issues and Limitations..................................................................... 16 LIST OF TABLES Table 1 CUDA Toolkit and Compatible Driver Versions .....................................................2 Chapter 1. CUDA TOOLKIT MAJOR COMPONENTS This section provides an overview of the major components of the CUDA Toolkit and points to their locations after installation. Compiler The CUDA-C and CUDA-C++ compiler, nvcc, is found in the bin/ directory. It is built on top of the NVVM optimizer, which is itself built on top of the LLVM compiler infrastructure. Developers who want to target NVVM directly can do so using the Compiler SDK, which is available in the nvvm/ directory. Please note that the following files are compiler-internal and subject to change without any prior notice. - any file in include/crt and bin/crt - include/common_functions.h, include/device_double_functions.h, include/device_functions.h, include/host_config.h, include/host_defines.h, and include/math_functions.h - nvvm/bin/cicc - bin/cudafe++, bin/bin2c, and bin/fatbinary Tools The following development tools are available in the bin/ directory (except for Nsight Visual Studio Edition (VSE) which is installed as a plug-in to Microsoft Visual Studio). - IDEs: nsight (Linux, Mac), Nsight VSE (Windows) - Debuggers: cuda-memcheck, cuda-gdb (Linux), Nsight VSE (Windows) - Profilers: nvprof, nvvp, Nsight VSE (Windows) - Utilities: cuobjdump, nvdisasm, gpu-library-advisor Libraries The scientific and utility libraries listed below are available in the lib/ directory (DLLs on Windows are in bin/), and their interfaces are available in the include/ directory. - cublas (BLAS) - cublas_device (BLAS Kernel Interface) - cuda_occupancy (Kernel Occupancy Calculation [header file implementation]) CUDA Toolkit Major Components - **cudadevrt** (CUDA Device Runtime) - **cudart** (CUDA Runtime) - **cufft** (Fast Fourier Transform [FFT]) - **cupti** (Profiling Tools Interface) - **curand** (Random Number Generation) - **cusolver** (Dense and Sparse Direct Linear Solvers and Eigen Solvers) - **cusparse** (Sparse Matrix) - **npp** (NVIDIA Performance Primitives [image and signal processing]) - **nvblas** ("Drop-in" BLAS) - **nvccvid** (CUDA Video Decoder [Windows, Linux]) - **nvgraph** (CUDA nvGRAPH [accelerated graph analytics]) - **nvml** (NVIDIA Management Library) - **nvrtc** (CUDA Runtime Compilation) - **nvtx** (NVIDIA Tools Extension) - **thrust** (Parallel Algorithm Library [header file implementation]) CUDA Samples Code samples that illustrate how to use various CUDA and library APIs are available in the `samples/` directory on Linux and Mac, and are installed to `C:\ProgramData\NVIDIA Corporation\CUDA Samples` on Windows. On Linux and Mac, the `samples/` directory is read-only and the samples must be copied to another location if they are to be modified. 
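By way of orientation only, a minimal self-contained program of the kind nvcc compiles might look as follows; this is an illustrative sketch, not one of the bundled samples.

```c
// Minimal illustrative sketch (not one of the bundled CUDA Samples).
// Build with: nvcc vector_add.cu -o vector_add
#include <cuda_runtime.h>
#include <cstdio>

__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Managed (unified) memory keeps the sketch short; explicit cudaMemcpy works too.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    // The <<< >>> launch syntax; cudaLaunchKernel is the equivalent runtime API.
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```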
Further instructions on the samples can be found in the Getting Started Guides for Linux and Mac.

Documentation

The most current version of these release notes can be found online at http://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html. Also, the `version.txt` file in the root directory of the toolkit will contain the version and build number of the installed toolkit. Documentation can be found in PDF form in the `doc/pdf/` directory, or in HTML form at `doc/html/index.html` and online at http://docs.nvidia.com/cuda/index.html.

CUDA Driver

Running a CUDA application requires a system with at least one CUDA-capable GPU and a driver that is compatible with the CUDA Toolkit. For more information on the GPU products that are CUDA-capable, visit https://developer.nvidia.com/cuda-gpus. Each release of the CUDA Toolkit requires a minimum version of the CUDA driver. The CUDA driver is backward compatible, meaning that applications compiled against a particular version of CUDA will continue to work on subsequent (later) driver releases. More information on compatibility can be found at https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html#cuda-runtime-and-driver-api-version.

Table 1 CUDA Toolkit and Compatible Driver Versions

<table>
<thead>
<tr> <th>CUDA Toolkit</th> <th>Linux x86_64 Driver Version</th> <th>Windows x86_64 Driver Version</th> </tr>
</thead>
<tbody>
<tr> <td>CUDA 7.0 (7.0.28)</td> <td>&gt;= 346.46</td> <td>&gt;= 347.62</td> </tr>
<tr> <td>CUDA 7.5 (7.5.16)</td> <td>&gt;= 352.31</td> <td>&gt;= 353.66</td> </tr>
<tr> <td>CUDA 8.0 (8.0.44)</td> <td>&gt;= 367.48</td> <td>&gt;= 369.30</td> </tr>
<tr> <td>CUDA 8.0 (8.0.61 GA2)</td> <td>&gt;= 375.26</td> <td>&gt;= 376.51</td> </tr>
<tr> <td>CUDA 9.0 (9.0.76)</td> <td>&gt;= 384.81</td> <td>&gt;= 385.54</td> </tr>
<tr> <td>CUDA 9.1 (9.1.85)</td> <td>&gt;= 387.26</td> <td>&gt;= 388.19</td> </tr>
<tr> <td>CUDA 9.2 (9.2.88)</td> <td>&gt;= 396.14</td> <td>&gt;= 397.05</td> </tr>
</tbody>
</table>

For convenience, the NVIDIA driver is installed as part of the CUDA Toolkit installation. Note that this driver is for development purposes and is not recommended for use in production with Tesla GPUs. For running CUDA applications in production with Tesla GPUs, it is recommended to download the latest driver for Tesla GPUs from the NVIDIA driver downloads site at http://www.nvidia.com/drivers.

During the installation of the CUDA Toolkit, the installation of the NVIDIA driver may be skipped on Windows (when using the interactive or silent installation) or on Linux (by using meta packages). For more information on customizing the install process on Windows, see http://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html#install-cuda-software. For meta packages on Linux, see https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#package-manager-metas

CUDA-GDB Sources

CUDA-GDB sources are available as follows:
- For CUDA Toolkit 7.0 and newer, in the installation directory `extras/`. The directory is created by default during the toolkit installation unless the `rpm` or `.deb` package installer is used. In this case, the `cuda-gdb-src` package must be manually installed.
- For CUDA Toolkit 6.5, 6.0, and 5.5, at https://github.com/NVIDIA/cuda-gdb.
- For CUDA Toolkit 5.0 and earlier, at ftp://download.nvidia.com/CUDAOpen64/. - Upon request by sending an e-mail to mailto:oss-requests@nvidia.com. Chapter 2. NEW FEATURES The release notes for the CUDA Toolkit can be found online at http://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html. 2.1. General CUDA - Improved kernel launch latency (using the <<< >>> syntax and the cudaLaunchKernel API) for both multithreaded and multi-GPU code by up to a factor of 2 compared to CUDA 9.0. - Added support for unified memory with address translation services (ATS) on IBM POWER9. - Added arithmetic operators for the __half2 data type and a volatile assignment operator for the __half data type. - Added version 6.2 of the Parallel Thread Execution instruction set architecture (ISA). For details about new instructions (activemask, nanosleep, FP16, and atomics) and deprecated instructions, see Parallel Thread Execution ISA Version 6.2 in the PTX documentation. - IPC functionality is now supported on Windows. - Added P2P write and read bandwidth and latency metrics to the p2pBandwidthLatencyTest sample. - Thrust now uses CUB v1.7.5. - Added some performance optimizations in Thrust for the templated complex type. - Added support for new operating systems. For a list of operating systems supported by CUDA, see the following information in the installation guides: - Windows system requirements - Mac OS X system requirements - Linux system requirements - Changed CUDA_DEVICE_ORDER==FASTEST_FIRST to enumerate GPUs in descending order of performance. - Added a new driver API cuStreamGetCtx to retrieve the context associated with a stream. This API is primarily used by the multidevice cooperative launch runtime API to ensure that the specified function's module is loaded in the right context. Added support for full core dump generation on Linux by using named pipes for MPS-based CUDA applications and CUDA applications that are not based on MPS. Added these new helper APIs for cooperative groups: - `grid_dim()` to get the 3-dimensional grid size - `block_dim()` to get the 3-dimensional block size ### 2.2. CUDA Tools #### 2.2.1. CUDA Compilers - The following compilers are supported as host compilers in `nvcc` - Clang 5.0 - GCC 7.x - Microsoft Visual Studio 2017 (RTW and Update 6) - PGI pgc++ 18.x - XLC 13.1.6 - `__device__ / __constant__` variables are now allowed to have an `rvalue` reference type. - Functions in `math_functions.hpp` have been changed to use `memcpy` for type punning. - Added support for `std::tuple`. - Enabled `pgcc` to include some CUDA header files by defining CUDA-specific macros with GNU-style attributes. #### 2.2.2. CUDA Profiler - Added new utilization and count metrics for Tensor Cores on GPUs based on the Volta architecture. - Added CLI options for `nvprof --trace <gpu,api>` to show trace and profile information in the same output. - Visual Profiler now includes a summary view to show the memory hierarchy. #### 2.2.3. CUDA Profiling Tools Interface (CUPTI) For information about new features such as PCIe topology, new metrics, and new profiling scope in CUPTI, see Changelog in the CUPTI documentation. - Added support in CUPTI to allow `hwpm_active_warps` and `hwpm_active_cycles` counters to be collected in a single pass. - Added support for the NVTX v3 interface #### 2.2.4. CUDA-GDB - CUDA now supports lightweight core dumps. 2.2.5. CUDA-MEMCHECK For new features in CUDA-MEMCHECK, see Release Notes in the CUDA-MEMCHECK documentation. 2.3. CUDA Libraries 2.3.1. 
cuBLAS Library - Improved performance for a range of small and large tile size matrices that are extensively used in RNN based speech and NLP models, Convolutional seq2seq (Fairseq) models, OpenAI research and other emerging DL applications. These sizes are optimized on the Tesla V100 architecture to deliver enhanced out-of-the-box performance. - Added GEMM API logging for developers to trace the algorithm and dataset used during the last BLAS API call. - Improved GEMM performance on Tesla V100 for single and double precision inputs. 2.3.2. NVIDIA Performance Primitives (NPP) - Added support for NV12-to-RGB format conversion, which is important for deep learning because the decoder output format is NV12 and the typical input format for networks is RGB. - Added primitives to convert real-valued images to complex-valued images and vice versa, for single-channeled images. - Added a new NPP sample under CUDA samples called boundSegmentsNPP. 2.3.3. cuFFT Library - Improved the performance for prime factor FFT sizes with fused bluestein kernels. - A new memory-usage API provides an optional minimal work area policy setting that allows: - Transforms of type C2C to be supported with sizes of up to 4096 in any dimension - Transforms of type Z2Z to be supported with sizes of up to 2048 in any dimension - Provided a new static library that supports only standard cuFFT APIs, that is, without the callbacks. Supporting standard only cuFFT APIs removes the dependency on the CUDA compiler and callback functionality for certain deployments. 2.3.4. cuSOLVER Library - Added the following sparse matrix reordering options: - A zero-free diagonal reordering option to permute rows of a sparse matrix such that there are no zeroes on diagonals after reordering. - An option for matrix reordering by using the METIS library. This option typically delivers smaller zero fill-in than nested dissection during factorization. 2.3.5. cuSPARSE Library - Significantly improved the performance of the merge-path-based sparse matrix-vector multiplication routines \((\text{csrmv}_\text{mp} \text{ and } \text{csrmv}_\text{Ex})\). - Added a new triangular solver \((\text{csrsm}_\text{2})\) that provides the same functionality as the existing \((\text{csrsv}_\text{2})\) but extends support for multiple right-hand-side vectors. - Added a batched pentadiagonal solver that supports 5-vector matrices and interleaved data layouts. This solver is intended for large batches (thousands) of small matrices (size in the hundreds). 2.3.6. Thrust Library - CUB 1.7.5 has been integrated as a device back end for Thrust. Chapter 3. DEPRECATED FEATURES The following features are deprecated in the current release of the CUDA software. The features still work in the current release, but their documentation may have been removed, and they will become officially unsupported in a future release. We recommend that developers employ alternative solutions to these features in their software. General CUDA - The execution control APIs in CUDA will be removed in the next release of CUDA and will no longer be available. These APIs are as follows: - cudaConfigureCall - cudaLaunch - cudaSetupArgument - The NVIDIA Video Decoder (NVCUVID) is deprecated. Instead, use the NVIDIA Video Codec SDK. As of CUDA 9.2, the following files are still available under the CUDA installation directory (for example, for Linux, this directory may be /usr/local/cuda/include). 
These files will be removed in the next release of the CUDA Toolkit: - dynlink_cuviddec.h - dynlink_nvcuvid.h - dynlink_cuda.h - dynlink_cuda_cuda.h - Windows nvcuvid static library: \lib\x64\nvcuvid.lib - The --use-local-env option no longer requires --cl-version and --cl-version is now ignored. With this change, nvcc detects the Microsoft Visual Studio compiler version from the local environment without relying on --cl-version. - In the next release of CUDA, Microsoft Visual Studio 2010 will no longer be supported. - Starting with R396, the OpenCL ICD loader version will be reported as 2.2 instead of 2.0. Note that there is no change in the OpenCL version (1.2) supported by NVIDIA. - Starting with R396, the Fermi architecture (sm_2x) is no longer supported. CUDA Libraries - Since CUDA 5.0, the cuBLAS library has supported the ability to call the same cuBLAS APIs from within device routines, i.e. kernels. These routines are implemented using the Dynamic Parallelism feature, which is available starting with the Kepler generation of GPUs. The device library (cublas_device) that enables this feature, is deprecated in this release and will be dropped starting next release. NOTE: none of the main cuBLAS library functionality and the APIs that can be called from the host, is impacted. 4.1. General CUDA - Fixed incorrect memory access issues when oceanFFT is running on GPUs based on the Volta architecture. - The macros in cooperative groups `cg_assert()` and `die()` have been renamed to `_CG_ASSERT()` and `_CG_ABORT()`. - Fixed a crash with the simpleIPC CUDA sample on 16-GPU systems. - Fixed an issue in NVML to allow users to set application clocks by using `nvidia-smi (nvidia-smi -ac)` without requiring root privileges on GPUs based on the Pascal and later architectures. - Improved the performance of the PTX JIT cache in a multiprocess environment. See the CUDA documentation about JIT cache management for more information. - Fixed a bug in the CUDA runtime that caused a `pthread_mutex` hang on the POWER platform when running some CUDA applications. - Fixed a bug in the CUDA memory allocator when using `cudaDeviceSetLimit()` that could result in heap corruption. - Fixed a bug in the `shfl_scan` CUDA sample code when converting unsigned `int` to `uchar4`. - In R396, removed `nv_flush_caches()` for recent kernels (2.6.25 and greater), which implement cache flush in `pageattr.c`. - Fixed a bug where the CUDA samples would not load when multiple versions of Microsoft Visual Studio are installed on the system. - In R396, fixed `nvmlDeviceGetTopologyCommonAncestor` to return `NVML_ERROR_NOT_SUPPORTED` instead of `NVML_ERROR_UNKNOWN` for GPUs that do not support this API. 4.2. CUDA Tools 4.2.1. CUDA Compilers - Fixed an issue in the CUDA compiler, where in some cases, mixing `shfl` and certain carry operations on `sm_70` produces incorrect code. - Fixed an issue in the CUDA compiler with incorrect constant folding in the presence of `mul.wide.u16` instructions. - Fixed a crash in PTXAS compiling certain PTX files that contain debugging information. - Fixed an incompatibility issue with `nvcc` and `glibc` 2.26. - In some cases, when NVVM IR is compiled with `libNVVM` on GCC with debugging information (`-g`), PTXAS may fail with the following message: Parsing error near `-`: syntax error. - Fixed a crash in PTXAS when a user-defined label is present at the start of a function. 
- Fixed a performance issue by tuning the CUDA compiler’s heuristics for application code that may contain a large number of switch statements. - Fixed an issue in the CUDA compiler to significantly reduce the compilation time for certain kernels that include complicated array element access patterns. - The explicit instantiation definition directive for a `__global__` function template is now supported. - Fixed an issue in the CUDA compiler related to incorrect parameter pack expansion. - The CUDA compiler previously incorrectly determined that the constructor for a `__shared__` multidimensional array variable was non-empty in some scenarios, and generated a spurious diagnostic. The bug has now been fixed. 4.2.2. CUDA Profiler - Fixed an issue in `nvprof` where the `--trace api` option does not print the API calls when the `--metrics` option or the `--events` option is also specified. - The NVLink topology diagram available in the Visual Profiler may be garbled and the rectangles representing the CPUs and GPUs may be overlapped. You can manually select and rearrange the processor rectangles to get a better layout. - Fixed an issue in the Visual Profiler when no events or metrics are collected when profiling on a remote system. 4.2.3. CUDA Profiling Tools Interface (CUPTI) - Fixed an issue with incorrect reporting of the half precision functional unit utilization (`hp_fu_utilization`) metric in CUPTI. 4.2.4. CUDA-GDB - Fixed an issue in CUDA-GDB to where `info float` would trigger an assert inside a CUDA stack frame. - Fixed an issue with CUDA-GDB where in some cases, continuing from a deleted breakpoint would result in an error on GPUs based on the Volta architecture. - Fixed an issue with CUDA-GDB where it would crash with an OpenMP 4.5 offload program compiled with the Clang compiler. 4.2.5. CUDA-MEMCHECK - Fixed an issue with CUDA-MEMCHECK where it did not correctly detect illegal memory accesses on GPUs based on the Volta architecture. 4.3. CUDA Libraries 4.3.1. cuBLAS Library - Fixed an issue with a cuBLAS malfunction for specific `int8` row-major GEMM sizes, which resulted in incorrect results. - Fixed an incorrect data type for `const float` used in batched GEMM APIs from `const float* foo[]` to `const float* const foo[]`. This fix enables users to bind a pointer of type `float**` or `float*[]` to the argument. - Fixed the cuBLAS code sample "Application Using C and CUBLAS: 0-based Indexing" that was cut off in the PDF version of cuBLAS Library User Guide. 4.3.2. NVIDIA Performance Primitives (NPP) - Fixed a functional correctness issue for the following NPP routines - `nppiDilate_8u_C1R` - `nppiDilate_16u_C1R` Chapter 5. KNOWN ISSUES 5.1. General CUDA - The driver that is supplied with CUDA 9.2 (R396) has known issues with the upcoming Windows 10 RS4 release. Users of Windows 10 RS4 should upgrade to the latest GA driver from nvidia.com. - In some cases on Windows, when CUDA 9.2 is installed with custom installation settings (where all display driver features are disabled), the existing desktop context menu may not show the NVIDIA Display Control Panel any more. Re-install the NVIDIA driver to obtain the control panel. - On systems with Fedora 27, the CUDA Toolkit runfile installer may fail to install without the elfutils-libelf-devel package installed. Install the missing package or install the dkms package to complete the installation of the CUDA Toolkit. 
- For warp matrix functions in this release, all threads in a warp must call the same load_matrix_sync() function at the same source line, otherwise the code execution is likely to hang or produce unintended side effects. For example, the following usage is not supported: ```c if (threadIdx.x % 2) { ... load_matrix_sync(...); ... } else { ... load_matrix_sync(...); ... } ``` The same restriction applies to calls to store_matrix_sync() and mma_sync(). 5.2. CUDA Tools 5.2.1. CUDA Compiler - `nvcc` in CUDA 9.2 has a known regression with function-try-blocks (see [except] in ISO C++ standard for the definition of function-try-blocks). In the presence of any function-try-blocks, compilation with `nvcc` aborts with an assertion failure. Function-try-blocks can be replaced with try-blocks in functions to work around this issue unless function-try-blocks are used to catch and handle exceptions thrown by member initializers (see [class.base.init] in ISO C++ standard for the definition of member initializers). For example, compilation of the following code with `nvcc` aborts with an assertion failure: ``` void f() try { /* do something */ } catch (…) { /* handle exception */ } ``` To avoid the failure, rewrite the code as follows: ``` void f() { try { /* do something */ } catch (…) { /* handle exception */ } } ``` This issue will be fixed in the next release. 5.2.2. CUDA Profiler - Event and metric collection is not supported for multidevice cooperative kernels, that is, kernels launched by using the API functions `cudaLaunchCooperativeKernelMultiDevice` or `cuLaunchCooperativeKernelMultiDevice`. - Because of the low resolution of the timer on Windows, the start and end timestamps can be same for activities having short execution duration on Windows. As a result, the Visual Profiler and `nvprof` report the following warning: Found N invalid records in the result. - The source file for unified memory profiling results cannot be opened in the source view if the user is remote profiling on a POWER platform through Visual Profiler. - Running both the analysis and the application in Analysis All fails on TCC. To work around this issue, run each unguided analysis and application analysis individually. 5.2.3. CUDA-MEMCHECK For known issues in CUDA-MEMCHECK, see Known Issues in the CUDA-MEMCHECK documentation. 5.3. CUDA Libraries 5.3.1. cuBLAS Library - The previously documented behavior of cuBLAS allowed the same handle to be used simultaneously from multiple host threads. However, there are multiple known issues with this, including in application crashes in some instances, and performance degradations in other situations. To avoid this issue, each host thread should use a separate cuBLAS handle to call the APIs. The documentation for the cuBLAS library has also been changed to indicate that simultaneous use of the same handle from multiple host threads is disallowed, as the functionality and performance issues will not be addressed. 5.4. CUDA Samples - The NVRTC samples on Mac OS do not link correctly. To work around the issue, modify the linker command in the Makefile to pass `-L/Developer/NVIDIA/CUDA-9.2/lib`. Chapter 6. CUDA TEGRA RELEASE NOTES The release notes for CUDA Tegra contain only information this is specific to the CUDA Tegra Driver and the mobile version of other CUDA components such as compilers, tools, libraries, and samples. The release notes for the desktop version of CUDA in the remaining chapters of this document also apply to CUDA Tegra. On Tegra, the CUDA Toolkit version is 9.2.78. 
6.1. New Features CUDA Tegra Driver - Support has been added for Pegasus on Vibrante Linux. - EGL Stream has been enhanced as follows: - Support for additional color formats for EGL streams has been added. - In addition to allowing the release of frames in order, support for out-of-order release of frames has been added. Applications can use this feature to speed up their computational tasks. - GPU work submission latency on Android, L4T, and QNX platforms has been optimized. - Support has been added on Linux for registering host allocation, which enables the use of third-party memory to be processed directly by the GPU. CUDA Tegra Driver API - cudaDevAttrHostRegisterSupported now checks whether the device supports host memory registration through the cudaHostRegister API. The attribute will be set to 1 if the device supports host memory registration (beyond Xavier with kernel driver and OS support) and 0 otherwise. 6.2. Known Issues and Limitations CUDA Tegra Driver - Starting from CUDA 9.2, 32-bit support will no longer be available. During initialization, the driver reserves a large amount of CPU virtual address (VA) for its internal memory management. On QNX, this CPU VA reservation might take a considerable amount of time on systems with large physical memory. Because of this behavior, CUDA initialization might take more time on QNX Xavier compared with earlier releases. NVIDIA is working with its partners to address this issue in upcoming releases. - **cudaHostRegister** on QNX is not supported because of lack of support from the QNX kernels. This functionality will be enabled in future releases. - CUDA allocators cannot make a single allocation greater than 4 GB on Tegra SoC memory. This limitation applies to all allocations on Tegra iGPU and zero-copy memory allocations on Tegra dGPU. To work around this limitation, ensure that applications make multiple allocations and aggregate them to create a large allocation. - The **cudaDeviceGetAttribute** method returns incorrect information (false) for the attribute **cudaDevAttrHostNativeAtomicSupported**. Native atomics have been supported from T194 onwards, but the device attribute is returned incorrectly. **CUDA Profiler** - PC sampling is not supported. - The Volta dGPU (GV100) is not supported. - This release does not support HWPM context switching. That means that counters that are collected through the HWPM counter provider are available at the device level only at this time. This will be fixed in a future release. **CUDA-GDB** - **QNX:** cuda-qnx-gdb may intermittently miss device breakpoints. - **QNX:** The info threads command in cuda-qnx-gdb displays the host threads even when the focus is on the device. - **Linux:** CUDA-GDB may intermittently miss device exceptions. - **Linux:** The set cuda memcheck on command in CUDA-GDB does not have any effect. - **Linux:** CUDA-GDB may intermittently miss device breakpoints in CUDA applications that use the iGPU and the dGPU at the same time. Acknowledgments NVIDIA extends thanks to Professor Mike Giles of Oxford University for providing the initial code for the optimized version of the device implementation of the double-precision \( \text{exp}() \) function found in this release of the CUDA toolkit. NVIDIA acknowledges Scott Gray for his work on small-tile GEMM kernels for Pascal. These kernels were originally developed for OpenAI and included since cuBLAS 8.0.61.2. 
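As a hedged illustration of the attribute queries referred to in 6.1 and 6.2 above (on the affected Tegra releases the reported values may be incorrect, as noted), querying these attributes through the runtime API looks roughly like this:

```c
// Illustrative sketch: querying the device attributes discussed in the Tegra notes.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int dev = 0;
    int hostRegister = 0;
    int nativeAtomics = 0;
    cudaDeviceGetAttribute(&hostRegister, cudaDevAttrHostRegisterSupported, dev);
    cudaDeviceGetAttribute(&nativeAtomics, cudaDevAttrHostNativeAtomicSupported, dev);
    printf("cudaHostRegister supported:    %d\n", hostRegister);
    printf("host native atomics supported: %d\n", nativeAtomics);
    return 0;
}
```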
Notice ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, "MATERIALS") ARE BEING PROVIDED "AS IS." NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE. Information furnished is believed to be accurate and reliable. However, NVIDIA Corporation assumes no responsibility for the consequences of use of such information or for any infringement of patents or other rights of third parties that may result from its use. No license is granted by implication of otherwise under any patent rights of NVIDIA Corporation. Specifications mentioned in this publication are subject to change without notice. This publication supersedes and replaces all other information previously supplied. NVIDIA Corporation products are not authorized as critical components in life support devices or systems without express written approval of NVIDIA Corporation. Trademarks NVIDIA and the NVIDIA logo are trademarks or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Copyright © 2007-2018 NVIDIA Corporation. All rights reserved. www.nvidia.com
{"Source-Url": "https://developer.download.nvidia.com/compute/cuda/9.2/Prod/docs/sidebar/CUDA_Toolkit_Release_Notes.pdf", "len_cl100k_base": 7412, "olmocr-version": "0.1.48", "pdf-total-pages": 22, "total-fallback-pages": 0, "total-input-tokens": 42924, "total-output-tokens": 8376, "length": "2e12", "weborganizer": {"__label__adult": 0.0010051727294921875, "__label__art_design": 0.0013742446899414062, "__label__crime_law": 0.0005893707275390625, "__label__education_jobs": 0.00045418739318847656, "__label__entertainment": 0.00034546852111816406, "__label__fashion_beauty": 0.0005578994750976562, "__label__finance_business": 0.0003709793090820313, "__label__food_dining": 0.0005402565002441406, "__label__games": 0.005023956298828125, "__label__hardware": 0.08502197265625, "__label__health": 0.00061798095703125, "__label__history": 0.0005316734313964844, "__label__home_hobbies": 0.0002636909484863281, "__label__industrial": 0.001697540283203125, "__label__literature": 0.000518798828125, "__label__politics": 0.00042366981506347656, "__label__religion": 0.001560211181640625, "__label__science_tech": 0.120849609375, "__label__social_life": 8.285045623779297e-05, "__label__software": 0.0355224609375, "__label__software_dev": 0.740234375, "__label__sports_fitness": 0.0008878707885742188, "__label__transportation": 0.0010423660278320312, "__label__travel": 0.0003390312194824219}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 33084, 0.03126]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 33084, 0.08446]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 33084, 0.77992]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 4393, false], [4393, 4517, null], [4517, 4517, null], [4517, 6102, null], [6102, 8653, null], [8653, 10813, null], [10813, 12494, null], [12494, 14105, null], [14105, 15886, null], [15886, 16869, null], [16869, 18499, null], [18499, 19031, null], [19031, 20440, null], [20440, 22588, null], [22588, 23842, null], [23842, 25087, null], [25087, 27174, null], [27174, 27803, null], [27803, 29271, null], [29271, 31225, null], [31225, 33084, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 4393, true], [4393, 4517, null], [4517, 4517, null], [4517, 6102, null], [6102, 8653, null], [8653, 10813, null], [10813, 12494, null], [12494, 14105, null], [14105, 15886, null], [15886, 16869, null], [16869, 18499, null], [18499, 19031, null], [19031, 20440, null], [20440, 22588, null], [22588, 23842, null], [23842, 25087, null], [25087, 27174, null], [27174, 27803, null], [27803, 29271, null], [29271, 31225, null], [31225, 33084, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 33084, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 33084, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 33084, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 33084, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 33084, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 33084, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 33084, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 33084, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 33084, null]], 
"google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 33084, null]], "pdf_page_numbers": [[0, 0, 1], [0, 4393, 2], [4393, 4517, 3], [4517, 4517, 4], [4517, 6102, 5], [6102, 8653, 6], [8653, 10813, 7], [10813, 12494, 8], [12494, 14105, 9], [14105, 15886, 10], [15886, 16869, 11], [16869, 18499, 12], [18499, 19031, 13], [19031, 20440, 14], [20440, 22588, 15], [22588, 23842, 16], [23842, 25087, 17], [25087, 27174, 18], [27174, 27803, 19], [27803, 29271, 20], [29271, 31225, 21], [31225, 33084, 22]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 33084, 0.03235]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
cf0ecb11879bff8e46ffafbd7001f36331211624
### 1 Introduction

In this lecture we revisit the general description of quicksort from last lecture\(^1\) and develop an imperative implementation of it in C0. As usual, contracts and loop invariants will bridge the gap between the abstract idea of the algorithm and its implementation.

\(^1\) Omitted from the lecture notes there.

### 2 The Quicksort Algorithm

Quicksort again uses the technique of divide-and-conquer. We proceed as follows:

1. Pick an arbitrary element of the array (the *pivot*).
2. Divide the array into two subarrays, those that are smaller and those that are greater (the *partition* phase).
3. Recursively sort the subarrays.
4. Put the pivot in the middle, between the two sorted subarrays, to obtain the final sorted array.

In merge sort, it was easy to divide the input (we just picked the midpoint), but it was expensive to merge the results of sorting the left and right subarrays. In quicksort, dividing the problem into subproblems could be computationally expensive (as we analyze partitioning below), but putting the results back together is immediate. This kind of trade-off is frequent in algorithm design.

Let us analyze the asymptotic complexity of the partitioning phase of the algorithm. Say we have the array \{3, 1, 4, 4, 7, 2, 8\} and we pick 3 as our pivot. Then we have to compare each element of this (unsorted!) array to the pivot to obtain a partition such as

\[ lt = \{2, 1\}, \ pivot = 3, \ geq = \{4, 7, 8, 4\} \]

We have picked an arbitrary order for the elements in the subarrays; all that matters is that all smaller ones are to the left of the pivot and all larger ones are to the right. Since we have to compare each element to the pivot, but otherwise just collect the elements, it seems that the partition phase of the algorithm should have complexity \(O(k)\), where \(k\) is the length of the array segment we have to partition.

How many recursive calls do we have in the worst case, and how long are the subarrays? In the worst case, we always pick either the smallest or largest element in the array so that one side of the partition will be empty, and the other has all elements except for the pivot itself. In the example above, the recursive calls might proceed as follows:

| call | pivot |
|------|-------|
| qsort({3, 1, 4, 4, 7, 2, 8}) | 1 |
| qsort({3, 4, 4, 7, 2, 8}) | 2 |
| qsort({3, 4, 4, 7, 8}) | 3 |
| qsort({4, 4, 7, 8}) | 4 |
| qsort({4, 7, 8}) | 4 |
| qsort({7, 8}) | 7 |
| qsort({8}) | |

All other recursive calls are with the empty array segment, since we never have any elements less than the pivot. We see that in the worst case there are \(n - 1\) significant recursive calls for an array of size \(n\). The \(k\)th recursive call has to sort a subarray of size \(k\), which proceeds by partitioning, requiring \(O(k)\) comparisons. This means that, overall, for some constant $c$ we have

$$c \sum_{i=0}^{n-1} i = c \frac{n(n-1)}{2} \in O(n^2)$$

comparisons. Here we used the fact that $O(p(n))$ for a polynomial $p(n)$ is always equal to $O(n^k)$ where $k$ is the leading exponent of the polynomial. This is because the largest exponent of a polynomial will eventually dominate the function, and big-O notation ignores constant coefficients.

So quicksort has quadratic complexity in the worst case. How can we mitigate this?
If we always picked the median among the elements in the subarray we are trying to sort, then half the elements would be less and half the elements would be greater. So in this case there would be only $\log(n)$ recursive calls, where at each layer we have to do a total amount of $n$ comparisons, yielding an asymptotic complexity of $O(n \cdot \log(n))$. Unfortunately, it is not so easy to compute the median to obtain the optimal partitioning. It turns out that if we pick a random element, it will be on average close enough to the median that the expected running time of the algorithm is still $O(n \cdot \log(n))$. We really should make this selection randomly. With a fixed-pick strategy, there may be simple inputs on which the algorithm takes $O(n^2)$ steps. For example, if we always pick the first element, then if we supply an array that is already sorted, quicksort will take $O(n^2)$ steps (and similarly if it is “almost” sorted with a few exceptions)! If we pick the pivot randomly each time, the kind of array we get does not matter: the expected running time is always the same, namely $O(n \cdot \log(n))$. This is an important use of randomness to obtain a reliable average case behavior. ### 3 The qsort Function We now turn our attention to developing an imperative implementation of quicksort, following our high-level description. We implement quicksort in the function `qsort` as an in-place sorting function that modifies a given array instead of creating a new one. It therefore returns no value, which is expressed by giving a return type of `void`. ```c void qsort(int[] A, int lower, int upper) //@requires 0 <= lower && lower <= upper && upper <= \length(A); //@ensures is_sorted(A, lower, upper); { ``` **Lecture Notes** **February 3, 2011** We sort the segment $A[\text{lower}..\text{upper})$ of the array between $\text{lower}$ (inclusively) and $\text{upper}$ (exclusively). The precondition in the @requires annotation verifies that the bounds are meaningful with respect to $A$. The postcondition in the @ensures clause guarantees that the given segment is sorted when the function returns. It does not express that the output is a permutation of the input, which is required to hold but is not formally expressed in the contract (see Exercise 1). Before we start the body of the function, we should consider how to terminate the recursion. We don’t have to do anything if we have an array segment with 0 or 1 elements. So we just return if $\text{upper} - \text{lower} \leq 1$. ```c void qsort(int[] A, int lower, int upper) //@requires 0 <= lower && lower <= upper && upper <= \length(A); //@ensures is_sorted(A, lower, upper); { if (upper-lower <= 1) return; ... } ``` Next we have to call a partition function. We want partitioning to be done in place, modifying the array $A$. Still, partitioning needs to return the index $i$ of the pivot element because we then have to recursively sort the two subsegments to the left and right of the where the pivot is after partitioning. So we declare: ```c int partition(int[] A, int lower, int upper) //@requires 0 <= lower && lower < upper && upper <= \length(A); //@ensures lower <= \result && \result < upper; //@ensures gt(A[\result], A, lower, \result); //@ensures leq(A[\result], A, \result+1, upper); ; ``` Here we use the auxiliary functions $\text{gt}$ (for greater than) and $\text{leq}$ (for less or equal), where - $\text{gt}(x, A, \text{lower}, i)$ if $x > y$ for every $y$ in $A[\text{lower}..i)$. 
- $\text{leq}(x, A, i+1, \text{upper})$ if $x \leq y$ for every $y$ in $A[i+1..\text{upper})$. Lecture Notes February 3, 2011 Their definitions can be found in the `qsort.c0` file on the course web pages. Some details on this specification: we require \( \text{lower} < \text{upper} \) because if they were equal, then the segment could be empty and we cannot possibly pick a pivot element or return its index. We ensure that \( \text{result} < \text{upper} \) so that the index of the pivot is a legal index in the segment \( A[\text{lower}..\text{upper}) \). Now we can fill in the remainder of the main sorting function. ```c void qsort(int[] A, int lower, int upper) //@requires 0 <= lower && lower <= upper && upper <= \length(A); //@ensures is_sorted(A, lower, upper); { if (upper-lower <= 1) return; int i = partition(A, lower, upper); qsort(A, lower, i); qsort(A, i+1, upper); return; } ``` It is a simple but instructive exercise to reason about this program, using only the contract for `partition` together with the preconditions for `qsort` (see Exercise 2). To show that the `qsort` function terminates, we have to show the array segment becomes strictly smaller in each recursive call. First, \( i - \text{lower} < \text{upper} - \text{lower} \) since \( i < \text{upper} \) by the postcondition for `partition`. Second, \( \text{upper} - (i + 1) < \text{upper} - \text{lower} \) because \( i + 1 > \text{lower} \), also by the postcondition for `partition`. ### 4 Partitioning The trickiest aspect of quicksort is the partitioning step, in particular since we want to perform this operation in place. Once we have determined the pivot element, we want to divide the array segment into four different subsegments as illustrated in this diagram. We fix lower and upper as they are when partition is called. The segment $A[lower..left)$ contains elements known to be less than the pivot, the segment $A[left..right)$ contains elements greater or equal to the pivot, and the element at $A[upper-1]$ is the pivot itself. The segment from $A[right..upper-1)$ has not yet been scanned, so we don’t know yet how these elements compare to the pivot. We proceed by comparing $A[right]$ with the pivot. In this particular example, we see that $A[right] < pivot$. In this case we swap the element with the element at $A[left]$ and advance both left and right, resulting in the following situation: The other possibility is that $A[right] \geq pivot$. In that case we can just advance the right index by one and maintain the invariants without swapping any elements. The resulting situation would be the following. ![Diagram showing the partitioning process in Quicksort] When `right` reaches `upper` − 1, the situation will look as follows: ![Diagram showing the partitioning process after `right` reaches `upper` − 1] We can now just swap the pivot with \( A[left] \), which is known to be greater or equal to the pivot. ![Diagram showing the pivot swapped with \( A[left] \)] The resulting array segment has been partitioned, and we return `left` as the index of the pivot element. Throughout this process, we have only ever swapped two elements of the array. This guarantees that the array segment after partitioning is a permutation of the segment before. However, we did not consider how to start this algorithm. We begin by picking a random element as the pivot and then swapping it with the last element in the segment. We then initialize \( \text{left} \) and \( \text{right} \) to \( \text{lower} \). 
We then have the situation where the two segments with smaller and greater elements than the pivot are still empty. In this case (where \( \text{left} = \text{right} \)), if \( A[\text{right}] \geq \text{pivot} \) then we can increment \( \text{right} \) as before, preserving the invariants for the segments. However, if \( A[\text{left}] < \text{pivot} \), swapping \( A[\text{left}] \) with \( A[\text{right}] \) has no effect. Fortunately, incrementing both \( \text{left} \) and \( \text{right} \) preserves the invariant since the element we just checked is indeed less than the pivot. If \( \text{left} \) and \( \text{right} \) ever separate, we are back to the generic situation we dis- cussed at the beginning. In this example, this happens in the next step. If \( \text{left} \) and right always stay the same, all elements in the array segment are strictly less than the pivot, excepting only the pivot itself. In that case, too, swapping \( A[\text{left}] \) and \( A[\text{right}] \) has no effect and we return \( \text{left} = \text{upper} - 1 \) as the correct index for the pivot after partitioning. **Implementing Partitioning** Now that we understand the algorithm and its correctness proof, it remains to turn these insights into code. We start by computing the index of the pivot and move the pivot to \( A[\text{upper} - 1] \). To keep the code simple, we take the midpoint of the segment instead of randomly selecting one. This will work well if the array is random, or if it is almost sorted. ```c int partition(int[] A, int lower, int upper) //@requires 0 <= lower && lower < upper && upper <= \length(A); //@ensures lower <= \result && \result < upper; //@ensures gt(A[\result], A, lower, \result); //@ensures leq(A[\result], A, \result+1, upper); { int pivot_index = lower+(upper-lower)/2; int pivot = A[pivot_index]; swap(A, pivot_index, upper-1); ... } ``` At this point we initialize \( \text{left} \) and \( \text{right} \) to \( \text{lower} \). We scan the array using the index \( \text{right} \) until it reaches \( \text{upper} - 1 \). int pivot_index = lower+(upper-lower)/2; int pivot = A[pivot_index]; swap(A, pivot_index, upper-1); int left = lower; int right = lower; while (right < upper-1) { ... } Next, we should turn the observations about the state of the algorithm made in the preceding section into loop invariants. The zeroth one just records the relative position of the indices into the array. The first one states that the pivot is strictly greater than any element in the segment \( A[lower..left) \). The second states the the pivot is less or equal any element in the segment \( A[left..right) \). The third one expresses that the pivot is stored at \( A[upper-1] \) swap(A, pivot_index, upper-1); int left = lower; int right = lower; while (right < upper-1) {//@loop_invariant lower <= left && left <= right && right < upper; //@loop_invariant gt(pivot, A, lower, left); //@loop_invariant leq(pivot, A, left, right); //@loop_invariant pivot == A[upper-1]; { ... } It is easy to verify that the invariants are satisfied initially, given that we also know \( lower < upper \) from the function precondition. In the body of the loop we compare the pivot with \( A[right] \) and, in each case, take the appropriate actions described in the previous section. 
```c while (right < upper-1) //@loop_invariant lower <= left && left <= right && right < upper; //@loop_invariant gt(pivot, A, lower, left); //@loop_invariant leq(pivot, A, left, right); //@loop_invariant pivot == A[upper-1]; { if (pivot <= A[right]) { right++; } else { swap(A, left, right); left++; right++; } } ``` Again, it is straightforward to check that the loop invariant is preserved, based on the description in the previous section. It is important to distinguish the special case that \( left = right \) when the second invariant (\( \text{leq}(\ldots) \)) is vacuously satisfied. At the end, we swap \( A[left] \) with \( A[upper - 1] \) and return \( left \) as the index of the pivot in the partitioned arrays. The complete code is on the next page. int partition(int[] A, int lower, int upper) //@requires 0 <= lower && lower < upper && upper <= \length(A); //@ensures lower <= \result && \result < upper; //@ensures gt(A[\result], A, lower, \result); //@ensures leq(A[\result], A, \result+1, upper); { int pivot_index = lower+(upper-lower)/2; int pivot = A[pivot_index]; swap(A, pivot_index, upper-1); int left = lower; int right = lower; while (right < upper-1) //@loop_invariant lower <= left && left <= right && right < upper; //@loop_invariant gt(pivot, A, lower, left); //@loop_invariant leq(pivot, A, left, right); //@loop_invariant pivot == A[upper-1]; { if (pivot <= A[right]) { right++; } else { swap(A, left, right); left++; right++; } } swap(A, left, upper-1); return left; } Exercises **Exercise 1** In this exercise we explore strengthening the contracts on in-place sorting functions. 1. Write a function `is_permutation` which checks that one segment of an array is a permutation of another. 2. Extend the specifications of sorting and partitioning to include the permutation property. 3. Discuss any specific difficulties or problems that arise. Assess the outcome. **Exercise 2** Prove that the precondition for `qsort` together with the contract for `partition` implies the postcondition. During this reasoning you may also assume that the contract holds for recursive calls. **Exercise 3** Our implementation of partitioning did not pick a random pivot, but took the middle element. Construct an array with seven elements on which our algorithm will exhibit its worst-case behavior, that is, on each step, one of the partitions is empty.
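The notes refer to the specification helpers is_sorted, gt, leq, and swap from `qsort.c0` without listing them. The following is only a rough sketch of how they might be written; it is not the course's actual file.

```c
/* Hedged sketch of the specification helpers used in the contracts above;
   the official definitions are in qsort.c0 on the course web pages. */

void swap(int[] A, int i, int j)
//@requires 0 <= i && i < \length(A);
//@requires 0 <= j && j < \length(A);
{
  int tmp = A[i];
  A[i] = A[j];
  A[j] = tmp;
}

bool gt(int x, int[] A, int lower, int upper)
//@requires 0 <= lower && lower <= upper && upper <= \length(A);
{ /* true iff x > y for every y in A[lower..upper) */
  for (int i = lower; i < upper; i++) {
    if (!(x > A[i])) return false;
  }
  return true;
}

bool leq(int x, int[] A, int lower, int upper)
//@requires 0 <= lower && lower <= upper && upper <= \length(A);
{ /* true iff x <= y for every y in A[lower..upper) */
  for (int i = lower; i < upper; i++) {
    if (!(x <= A[i])) return false;
  }
  return true;
}

bool is_sorted(int[] A, int lower, int upper)
//@requires 0 <= lower && lower <= upper && upper <= \length(A);
{ /* true iff A[lower..upper) is in ascending order */
  for (int i = lower; i < upper-1; i++) {
    if (!(A[i] <= A[i+1])) return false;
  }
  return true;
}
```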
{"Source-Url": "http://www.cs.cmu.edu/afs/cs.cmu.edu/user/fp/www/courses/15122-s11/lectures/08-qsort.pdf", "len_cl100k_base": 4338, "olmocr-version": "0.1.50", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 30677, "total-output-tokens": 5020, "length": "2e12", "weborganizer": {"__label__adult": 0.00048279762268066406, "__label__art_design": 0.00024044513702392575, "__label__crime_law": 0.0004374980926513672, "__label__education_jobs": 0.0006890296936035156, "__label__entertainment": 6.61015510559082e-05, "__label__fashion_beauty": 0.00018024444580078125, "__label__finance_business": 0.00013649463653564453, "__label__food_dining": 0.0006918907165527344, "__label__games": 0.0010938644409179688, "__label__hardware": 0.0011835098266601562, "__label__health": 0.0006237030029296875, "__label__history": 0.00022971630096435547, "__label__home_hobbies": 0.00011551380157470704, "__label__industrial": 0.00042629241943359375, "__label__literature": 0.0002353191375732422, "__label__politics": 0.0003364086151123047, "__label__religion": 0.0005497932434082031, "__label__science_tech": 0.006839752197265625, "__label__social_life": 9.191036224365234e-05, "__label__software": 0.0022430419921875, "__label__software_dev": 0.98193359375, "__label__sports_fitness": 0.0005068778991699219, "__label__transportation": 0.000640869140625, "__label__travel": 0.00025153160095214844}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 16478, 0.01585]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 16478, 0.48697]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 16478, 0.82102]], "google_gemma-3-12b-it_contains_pii": [[0, 975, false], [975, 2933, null], [2933, 5217, null], [5217, 7078, null], [7078, 8694, null], [8694, 9537, null], [9537, 10075, null], [10075, 11201, null], [11201, 12599, null], [12599, 13724, null], [13724, 14736, null], [14736, 15603, null], [15603, 16478, null]], "google_gemma-3-12b-it_is_public_document": [[0, 975, true], [975, 2933, null], [2933, 5217, null], [5217, 7078, null], [7078, 8694, null], [8694, 9537, null], [9537, 10075, null], [10075, 11201, null], [11201, 12599, null], [12599, 13724, null], [13724, 14736, null], [14736, 15603, null], [15603, 16478, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 16478, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 16478, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 16478, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 16478, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 16478, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 16478, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 16478, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 16478, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 16478, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 16478, null]], "pdf_page_numbers": [[0, 975, 1], [975, 2933, 2], [2933, 5217, 3], [5217, 7078, 4], [7078, 8694, 5], [8694, 9537, 6], [9537, 10075, 7], [10075, 11201, 8], [11201, 12599, 9], [12599, 13724, 10], [13724, 14736, 11], [14736, 15603, 12], [15603, 16478, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 16478, 0.04433]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
b710e83bc513713f47b675c7e886f3746bad09cd
Mechanisms and Architectures for Tail-Tolerant System Operations in Cloud Qinghua Lu, Liming Zhu, Xiwei Xu, Len Bass, Shanshan Li, Weishan Zhang, Ning Wang corresponding authors: Qinghua Lu (e-mail: lqh@cup.edu.cn), Liming Zhu (e-mail: lizhu@cup.edu.cn), Xiwei Xu (e-mail: xu@cup.edu.cn), Len Bass (e-mail: bass@cup.edu.cn), Shanshan Li (e-mail: lishanshan@cup.edu.cn), Weishan Zhang (e-mail: zhangweishan@cup.edu.cn), Ning Wang (e-mail: wangning@cup.edu.cn) College of Computer and Communication Engineering, China University of Petroleum Software Systems Research Group, NICTA Abstract Conducting system operations (such as upgrade, reconfiguration, deployment) for large-scale systems in cloud is error prone and complex. These operations rely heavily on unreliable cloud infrastructure APIs to complete. The inherent uncertainties and inevitable errors cause a long-tail in the completion time distribution of operations. In this paper, we propose mechanisms and deployment architecture tactics to tolerate the long-tail. We wrapped cloud provisioning API calls and implemented deployment tactics at the architecture level for system operations. Our initial evaluation shows that the mechanisms and deployment tactics can effectively reduce the long tail. 1. Introduction Conducting system operations (such as upgrade, reconfiguration, deployment) for large-scale modern distributed systems in cloud is error prone and complex [1]. System operations in a cloud are performed through cloud APIs provided by cloud providers. Therefore, the completion time and reliability of these tasks depends on the reliability and performance of API calls. We previously did an empirical study on cloud API issues [4] and observed that a large percentage of the cases reported in the EC2 forum [5] are related to stuck API calls and slow response to API calls. The majority of the API issues are unavoidable timing failures that cannot be reduced in a large-scale system and often exhibit a crash-recovery behavior. Those API timing failures are major causes of the long-tail of the timing distribution of operation tasks. However, existing research on system operation focuses on reducing errors and repair time [2-3] rather than tolerating reduced latency issues. From an architecture perspective, one step of an operation either needs to touch many cloud instances in parallel or results in “deep hierarchical” calls, which means one top-level call leads to another call and another call. If one of these dependent calls is slow to respond in a large-scale system the initial operation will be slow to respond. Such problems are already being observed in typical large-scale fan-out systems. For example, Netflix Hystrix tries to keep timeouts short and to fail fast to avoid cascading timeouts [6]. Jeff Dean’s request hedging technique makes the same request to multiple replicas and uses the results of the first request to respond [7]. We argue that large-scale deployment architecture in cloud is also a fan out and deep hierarchical system from an operational point of view and consequently deployment operations will also exhibit a long tail on their timing distribution. In this paper, we propose a set of mechanisms and deployment architecture tactics to tolerate the long-tail. We implemented our mechanisms as a tail-tolerant wrapper around EC2 APIs which are heavily used in system operation of applications hosted in Amazon cloud. 
At the architecture level, we implemented the proposed deployment architecture tactics that also reduce the long tails. We evaluate our long-tail tolerant mechanisms and deployment tactics through a set of experiments on AWS infrastructure. Our initial results show that the mechanisms and deployment architecture tactics can effectively remove the long tail of the timing distribution.

The rest of this paper is organized as follows. Section 2 presents the tail-tolerant mechanisms and our tail-tolerant API wrapper. Section 3 discusses the proposed deployment architecture tactics. Section 4 evaluates the proposed solutions. Section 5 covers related work. Section 6 concludes the paper and outlines future work.

2. Tail-Tolerant Mechanisms and API Wrapper

An operation or a set of cloud API calls can be seen as a process or a workflow. Our approach for dealing with timing failures is to adapt exception-handling patterns of workflows [8-9]. We first discuss the workflow exception handling and then we discuss how we wrapped cloud API calls to utilize these patterns.

2.1. Tail-Tolerant Mechanisms

The workflow patterns that we are using assume there are six states within the lifecycle of a workflow operation: requested, cancelled, allocated, started, failed and completed. These are represented by the rectangles in Fig. 1. The transitions between the states are a combination of the workflow patterns and our adaptations. Our wrapper around the cloud API calls implements this state diagram. In general, the transitions represent calls to the original cloud APIs and the states represent either decision points or final states. The failed, completed, and cancelled states are final states and the other three are decision points.

The flow through this state diagram begins with an API call request that is intercepted by our wrapper, which enters the Requested state. The Requested state may choose to make a normal request or a hedged request. For example, a hedged request may issue one or more original EC2 API calls to launch instances. The wrapper then goes to the Allocated state. From the Allocated state, the wrapper may also force a completion or a failure depending on the state of the operation. The solid arrows depict the state transition of an original EC2 API call during normal execution. The dashed arrows show the mechanisms for dealing with a timing failure in a given state. Below we describe how we utilize these patterns in detail.

When an API call is being requested: The hedge-request pattern is similar to the "hedged requests" idea in Jeff Dean's paper [7]. For certain operations (e.g. launching multiple VMs), we will issue more requests than we need (e.g. launching/scaling out 12 instead of the 10 we need) and then cancel the remaining ones immediately after the required number is successfully reached. In the alternative-request pattern, an alternative API is requested at the same time as the original API.

When resources are being allocated to one or more original EC2 API calls: The continue-allocate pattern, reallocate pattern, force-fail-a pattern, and force-complete-a pattern can be used when an API request sent to an instance (i.e. a virtual machine) fails or is unresponsive. The continue-allocate pattern schedules the request to be sent to the same instance at a future time if the API request fails or there is no response from the cloud infrastructure within a certain time.
For example, in our disaster recovery product Yuruware Bolt, we need to move data from one region to another region for backup. One of the steps is to create an EBS volume from a snapshot. If the first "ec2-create-volume" call fails or gets stuck, the application sends another "ec2-create-volume" request when a timeout occurs. The reallocate pattern, on the other hand, resends the request to other instances. For example, in Yuruware Bolt, we need two data mover instances in two regions for backup. The EBS volume created from the snapshot is required to be attached to an instance. If the EBS volume cannot be attached to the instance, the application can attempt to attach the EBS volume to the instance again after several seconds using the continue-allocate pattern, or attach the EBS volume to another available instance. The cancel-allocate pattern is used to cancel the volume allocation for the original instance. Both the force-fail-a pattern and the force-complete-a pattern can be used when an API call has been retried several times and continues to fail. A default fallback can be used by marking the call as a failure or a completion. Force-complete-a is useful when the output of the API call can be known from other operations. For example, after an instance is started, in the case that the command ec2-describe-instances does not return any output, the user could try to connect to the instance host. If the instance is accessible, it means the instance is running. Thus, the request to ec2-describe-instances can be force-completed and all subsequent API calls can be triggered. These patterns can be used to deal with unresponsive API calls and slow API responses.
After an API call is started: There are three patterns that can be used when an API call is stuck in a state: the reallocate-s pattern, the cancel-start pattern, and the force-fail pattern. As we described earlier, an API call being stuck is a common complaint. In the reallocate-s pattern, the application gives up the current request and restarts the API request on another instance. For example, if an EBS volume cannot be attached to one instance, the application can try to attach it to a different instance within the same availability zone, and cancel the stuck attach API call at the same time (cancel-start pattern). Alternatively, the application can ignore the current request and resend the API request to the cloud infrastructure. For example, if an instance is stuck at initializing, the application can relaunch an instance. In the force-fail pattern, if an API call is stuck in a state for a certain time, it is regarded as a failed API call and no subsequent calls are triggered. This pattern is similar to force-fail-a.
2.2. API Wrapper
We implemented some of the mechanisms discussed in the previous section as a tail-tolerant API wrapper around the Amazon EC2 APIs. The initial API wrapper only wraps five API calls: launch an instance, start an instance, stop an instance, attach a volume and detach a volume. These five EC2 APIs are the most frequently used and have significant latency issues according to our own experience and an early empirical study of the AWS developer forum [5]. We built a timing profile for each API call and resort to other means as soon as the waiting time reaches a configurable 90th percentile of the historical return time.
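To make the hedging idea concrete before walking through the individual calls, the following is a minimal, self-contained C sketch of a "hedged call with timeout" helper built on POSIX threads. It is an illustration only, not the wrapper's actual code: the api_call_fn type, the hedged_call name, and the fixed pair of attempts are assumptions for the example, and a production wrapper would also cancel the losing EC2 request rather than merely ignoring its result.

```c
#include <pthread.h>
#include <stdlib.h>
#include <time.h>

typedef int (*api_call_fn)(void);     /* stands in for a blocking cloud API call */

struct hedge_ctx {
    api_call_fn     call;
    pthread_mutex_t lock;
    pthread_cond_t  done;
    int             finished;         /* number of attempts that have completed */
    int             refs;             /* caller + two worker threads            */
    int             result;           /* result of the first attempt to finish  */
};

static void ctx_release(struct hedge_ctx *ctx)
{
    pthread_mutex_lock(&ctx->lock);
    int remaining = --ctx->refs;
    pthread_mutex_unlock(&ctx->lock);
    if (remaining == 0) {             /* last user tears the context down */
        pthread_mutex_destroy(&ctx->lock);
        pthread_cond_destroy(&ctx->done);
        free(ctx);
    }
}

static void *attempt(void *arg)
{
    struct hedge_ctx *ctx = arg;
    int r = ctx->call();              /* possibly slow or stuck */
    pthread_mutex_lock(&ctx->lock);
    if (ctx->finished++ == 0)         /* first responder wins */
        ctx->result = r;
    pthread_cond_signal(&ctx->done);
    pthread_mutex_unlock(&ctx->lock);
    ctx_release(ctx);
    return NULL;
}

/* Issue the same call twice; return 0 and the first result within timeout_ms
 * (e.g. the 90th percentile of historical return times), or -1 on timeout. */
int hedged_call(api_call_fn call, long timeout_ms, int *out)
{
    struct hedge_ctx *ctx = malloc(sizeof *ctx);
    if (ctx == NULL)
        return -1;
    ctx->call = call;
    ctx->finished = 0;
    ctx->refs = 3;
    ctx->result = 0;
    pthread_mutex_init(&ctx->lock, NULL);
    pthread_cond_init(&ctx->done, NULL);

    for (int i = 0; i < 2; i++) {     /* the hedged pair of requests */
        pthread_t t;
        if (pthread_create(&t, NULL, attempt, ctx) == 0)
            pthread_detach(t);
        else
            ctx_release(ctx);         /* drop the reference of the missing worker */
    }

    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec  += timeout_ms / 1000;
    deadline.tv_nsec += (timeout_ms % 1000) * 1000000L;
    if (deadline.tv_nsec >= 1000000000L) {
        deadline.tv_sec += 1;
        deadline.tv_nsec -= 1000000000L;
    }

    int rc = 0;
    pthread_mutex_lock(&ctx->lock);
    while (ctx->finished == 0 && rc == 0)
        rc = pthread_cond_timedwait(&ctx->done, &ctx->lock, &deadline);
    int ok = (ctx->finished > 0);
    if (ok)
        *out = ctx->result;
    pthread_mutex_unlock(&ctx->lock);

    ctx_release(ctx);
    return ok ? 0 : -1;
}
```

Keeping such a helper at the wrapper level leaves the caller's code unchanged, which is the property the paper emphasizes for its EC2 wrapper.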
Launch-instance: The API wrapper implements the hedge-request pattern, which launches two instances by making two launch-instance API calls simultaneously when it receives a request. If one instance is launched within the time specified in the time profile of launch-instance, the API wrapper kills the other one once it is launched. If neither of them launches, the API wrapper implements the continue-request-s pattern, which re-launches another two instances.
Start-instance: The API wrapper implements the alternative-request pattern, which starts an instance and launches a new instance using the same image simultaneously, and cancels the one with the longer return time.
Stop-instance: The API wrapper issues a call to the stop-instance API and waits for the time specified in the time profile of stop-instance. If the call is not completed, the API wrapper forces the instance to stop using the "force stop" API, which implements the force-complete-s pattern.
Attach volume: The API wrapper attaches the volume to an instance and launches a new instance at the same time using the alternative-request pattern. The wrapper waits for the time specified in the time profile of attach-volume. If the call is not completed, it re-attaches the volume to the newly launched instance.
Detach volume: The API wrapper waits for the time specified in the time profile of detach-volume. If the call is not completed, the API wrapper implements the force-complete pattern, which force-detaches the volume.
3. Deployment Architecture Tactics
A large-scale deployment architecture in the cloud can be seen as a fan-out and deep hierarchical system from an operational point of view. Deployment architecture tactics can remove the long tail of operation tasks in the cloud. Three industry best practices are adapted in our work to reduce the long tail of operations in the cloud.
Immutable server: During the provisioning of a service, an instance is first provisioned by launching a virtual machine image, and then deployment tools are used to deploy software and configure the service in an on-demand fashion. A significant source of latency issues during a system operation comes from the on-demand phase after the server launches. Also, the longer a VM has been provisioned and running, the more likely it is to be in an unknown state. The immutable-server tactic means that operators make an image which contains everything a new version of an application needs. After the image is launched, nothing more is added or allowed to be changed. This helps reduce the tail latency issues of the on-demand phase.
Micro services: We break down an application stack or an application into smaller or even micro services and make each service run on different VMs or lightweight containers. There are a number of benefits in terms of reducing tail latency. First, it significantly reduces the latency-causing variability among the instances to be operated on. Instances belonging to a group to be operated on are essentially the same. Second, each instance is more lightweight. Third, there is less performance interference due to co-location. For example, traditionally, a single instance may host, say, 3 stateless services or a single service with the functionality of 3 web services. If operators want to upgrade something in one of the three web services, they potentially introduce a long tail because they have to touch a number of instances (say 100), and some of them will be slow statistically.
With micro services, operators can have 100 instances for service 1, 100 instances for service 2 and 100 instances for service 3. To upgrade something, administrators touch only the 100 instances of the affected service, which reduces the probability of a long tail.
Redundancy: Redundancy means administrators can run more than the required number of VMs to avoid long-tail operations. For example, if administrators want to upgrade 100 instances, they launch 103 instances during the upgrade, expecting that at least 3 of them will unavoidably be slow.
4. Evaluation
In this section, we evaluate our long-tail tolerant mechanisms and deployment architecture tactics through experiments.
4.1. Evaluation of Tail-Tolerant Mechanisms and API Wrapper
First we evaluate the API wrapper implementing the proposed API tail-tolerant mechanisms. Our experiments ran on AWS EC2. We selected the results of five API calls to report: launch-instance, start-instance, stop-instance, attach-volume and detach-volume. For each API we wrapped, we measured the return time 1,000 times. Since the focus of this paper is the long tail, we removed the API calls that failed with error messages; the calculation of the percentages is still based on 1,000 calls. We report the measurement results of start-instance and stop-instance in Fig. 2 and Fig. 3, and omit the results of the other three due to the length limit. In Fig. 2 and Fig. 3, the horizontal axis represents the return time of an API call, while the vertical axis represents the percentage of calls with the corresponding return time among the total 1,000 API calls. We observe that introducing the tolerant mechanisms in our API wrapper significantly reduces the long-tail failure rate. The longest return time of our API wrapper is 49s. The measurement results show that the API wrapper and the original EC2 API have a similar return-time distribution when the return time is less than 49s. However, the original EC2 API has a long tail extending to 185s. The API wrapper avoids the 1.8% of original EC2 API calls that would be viewed as long tail (longer than 49s). In Fig. 3, 90.0% of the original EC2 stop-instance API calls and 96.0% of the stop-instance API wrapper calls return within 21s. 3.7% of the stop-instance wrapper calls fall between 22s and 59s, while 6.9% of the original EC2 API calls fall in that range, and the remaining 1.0% of the original EC2 API calls form a long tail reaching 176s as the longest return time.
4.2. Evaluation of Deployment Tactics
In this section, we evaluate the deployment tactics by automatically upgrading 50 AMP (Apache + MySQL + PHP) stacks with shell scripts. This experiment also ran on the AWS platform. We use Ubuntu Server 12.04.3 LTS as the operating system. The experiment upgrades the AMP stack from Apache 2.0.65, MySQL 5.1.73, and PHP 5.2.17 to Apache 2.2.22, MySQL 5.5.35, and PHP 5.3.10 respectively. We implemented the three deployment tactics and compared the number of successfully upgraded VMs under each deployment tactic with a baseline, which represents an upgrade without any deployment tactics. The four cases in this experiment are detailed below. 1) Baseline: we upgraded AMP running on 50 VMs to the recent versions directly on the original VMs. 2) Immutable server: we created an image of a VM running the second version of AMP and launched 50 VMs using the image. Then we terminated the VMs running the old versions of AMP.
3) Micro services: we ran Apache and PHP on 50 VMs and ran MySQL on another 50 VMs, then we upgraded them directly on the original VMs. 4) Redundancy: we launched 3 extra VMs with AMP stacks. After the 3 extra VMs were successfully launched, we started upgrading the 53 VMs with AMP stacks. We ran each test case 100 times. We compared the 4 test cases and observed the results shown in Fig. 4. The horizontal axis represents the number of VMs successfully upgraded, while the vertical axis represents the percentage of runs with the corresponding number of VMs. Fig. 4 shows that all the deployment tactics can reduce the failure rate of the upgrade. The reduction achieved by "micro services" and "redundancy" is not as large as the reduction achieved by "immutable servers".
4.3. Discussion
Our experiments in Section 4.1 show that our API wrapper with the API tail-tolerant mechanisms can largely reduce the long tail of the original EC2 API. Although the probability of a long-tail return time is very low, the long-tail time itself is very long; sometimes it can be as long as 10 times the typical return time. Our solutions can significantly reduce the impact of API issues on the operations long tail and improve the reliability of operations in the cloud. The experimental results discussed in Section 4.2 show that the proposed deployment tactics can reduce the failure rate of operation tasks in the cloud for both parallel and hierarchical operations. By investigating the logs produced during the upgrades, we found that network problems cause most of the failures. Among the three deployment tactics, "immutable servers" has the largest impact on reducing the failure rate, because launching a fresh instance avoids VM failures caused by network connection problems. "Micro services" reduces the failure rate because each service runs independently and performance interference can be avoided to a certain degree. Users need to be aware that: 1) some mechanisms/tactics incur costs (e.g. the extra resources needed to split the AMP stack into micro services), which are visible in the configuration as the over-provisioning percentage or the estimated tail-latency size; 2) the effectiveness of our solution does depend on how the current operation is designed in terms of parallelism and VM dependency, but we are agnostic to it by providing an API-level wrapper and optimization; 3) our solution is at the API wrapper level and does not require users to change their code calling the API. To support multiple clouds, we will have separate wrappers for different cloud providers. Our solution is not a standardized API across clouds, which would require users to change their code. However, it is possible for our mechanisms to work across clouds behind the scenes.
5. Related Work
In cloud systems, runtime operation failures occur for different reasons [14-15], e.g. availability zone outages, hardware errors, overloaded databases, operating system crashes [10], and software bugs. Cloud infrastructure providers may not fully disclose the causes of outages or their cloud infrastructure design for competitive reasons, which makes the study of API issues more important. Microsoft researchers analysed cloud hardware failures and faults [11]: hard disks are the most frequently failing hardware due to their frequent usage and unreliability; 8% of servers in a data center can experience at least one hardware incident a year; and if a failure happens, the occurrence rate of another failure in the same server is high. Gill et al. [12] found that networks in data centers are highly reliable.
However, load balancers experience many software faults, and network redundancy is not entirely effective. Many of these failures are reflected differently at the API level, where users may not know the underlying causes. A significant portion of the API issues is related to slow API responses. Dean from Google summarized the reasons for slow API responses [7]: 1) different applications may reside on one machine and share resources; 2) applications running on different machines may share global resources; 3) background programs may generate latency; 4) various queuing in network switches and intermediate servers may cause latency. Dean believes that resource over-provisioning, real-time engineering of software, and improved reliability can help reduce the causes of API call latency. However, it is impossible to eliminate all API call latency. Therefore, Google proposes two techniques to deal with API call latency [7]: 1) a within-request immediate-response technique, which issues the request to multiple replicas and uses the first result returned; 2) a cross-request long-term adaptation technique, which issues different requests against differently partitioned data. At the application deployment level, approaches like [13] were proposed to optimize reliability, latency and energy when application components are deployed onto physical machines. However, that deployment platform involves physical machines where one has full control and visibility, rather than infrastructures with specific auto-scaling facilities and failures ranging from individual nodes to entire regions.
6. Conclusions
In this paper, we proposed tail-tolerant mechanisms and deployment architecture tactics to tolerate long-tail issues of operations in the cloud. We implemented our mechanisms as a tail-tolerant wrapper around Amazon cloud APIs, which are heavily used in system operations of applications hosted in the Amazon cloud. Our initial evaluation shows that the mechanisms and deployment architecture tactics can remove the long tail.
7. Acknowledgements
This project is supported by "the Fundamental Research Funds for the Central Universities" (No. 14CX02140A and No. 14CX02137A) and "the Scientific Research Foundation of China University of Petroleum" (No. Y1307021). NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Centre of Excellence program.
8. References
{"Source-Url": "https://www.usenix.org/system/files/conference/hotcloud14/hotcloud14-lu.pdf", "len_cl100k_base": 4694, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 25454, "total-output-tokens": 5904, "length": "2e12", "weborganizer": {"__label__adult": 0.0003120899200439453, "__label__art_design": 0.0004730224609375, "__label__crime_law": 0.0003371238708496094, "__label__education_jobs": 0.001377105712890625, "__label__entertainment": 0.00011843442916870116, "__label__fashion_beauty": 0.00016391277313232422, "__label__finance_business": 0.0008616447448730469, "__label__food_dining": 0.0003597736358642578, "__label__games": 0.0005450248718261719, "__label__hardware": 0.001605987548828125, "__label__health": 0.0008177757263183594, "__label__history": 0.0003490447998046875, "__label__home_hobbies": 0.00010603666305541992, "__label__industrial": 0.0005469322204589844, "__label__literature": 0.0003764629364013672, "__label__politics": 0.00028586387634277344, "__label__religion": 0.00039315223693847656, "__label__science_tech": 0.2049560546875, "__label__social_life": 0.0001246929168701172, "__label__software": 0.0323486328125, "__label__software_dev": 0.75244140625, "__label__sports_fitness": 0.00022304058074951172, "__label__transportation": 0.0005750656127929688, "__label__travel": 0.0002275705337524414}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25862, 0.03133]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25862, 0.06767]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25862, 0.89606]], "google_gemma-3-12b-it_contains_pii": [[0, 4957, false], [4957, 9917, null], [9917, 15000, null], [15000, 18078, null], [18078, 23187, null], [23187, 25862, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4957, true], [4957, 9917, null], [9917, 15000, null], [15000, 18078, null], [18078, 23187, null], [23187, 25862, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 25862, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25862, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25862, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25862, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 25862, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25862, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25862, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25862, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25862, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25862, null]], "pdf_page_numbers": [[0, 4957, 1], [4957, 9917, 2], [9917, 15000, 3], [15000, 18078, 4], [18078, 23187, 5], [23187, 25862, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25862, 0.0]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
823597654f56ffe71420648c4d9bb6b77028dfa6
Overview Purpose To provide a standard and robust C-language ARM7 software interface to the Controller Area Network (CAN) busses that form the main interconnect of Ranger’s peripheral nervous system. Problem Statement Ranger’s electronic nervous system essentially consists of two parts, the central and peripheral nervous systems. The peripheral nervous system in turn consists of a number of ARM7 microcontrollers boards, known as satellites, each connected to one or more CAN busses which each terminate at a central ARM7 known as the CAN Router. The CAN Router, an ARM9 microcontroller known as the Main Brain, and a high-speed Serial Peripheral Interface (SPI) bus interconnect between the two, together constitute Ranger’s central nervous system. Given the multitude of ARM7 microcontrollers accessing CAN busses, it is very desirable to have common software shared among all of the satellites for doing so. The main ostensible benefits of such a system are only having to debug the CAN software once and allowing programmers to add new boards and messages to the CAN busses with minimal effort. The conceived requirements for the CAN Module were: 1) Support transferring frames with various mixed data-type payloads. 2) Automatically disseminate data immediately upon receipt. 3) Assemble data from remote locations into complete frames for transmission on demand. 4) Cleanly integrate with the task scheduler in order to schedule CAN transmissions. Description of CAN Bus The CAN Bus is a differential two-wire serial data bus that nominally operates at a raw bit rate of up to 1 MHz, although that has been successfully overclocked to 4 MHz on Ranger. The CAN bus works by transferring frames with payloads of up to eight bytes each. Each frame is transmitted with an identifier, known as the CAN ID, of either 11 bits in standard mode or 29 bits in extended mode. CAN controllers do a substantial amount of work in hardware, including error detection and retransmission, multiple transmitter time-sharing and prioritization based on CAN ID, and frame parsing and filtering. **CAN Module Description** The CAN Module consists of two communicating layers, a frame transfer layer and a frame assembly and distribution layer. The purpose of the frame transfer layer is to send and receive complete frames of data over multiple physical CAN busses, while the purpose of the frame assembly and distribution layer is to collect data from disparate locations on a satellite into complete frames and to disseminate data from complete frames to disparate locations on satellites. **Code Structure** The evolution of the CAN module led to the intermixing of the source code for what became the two layers of module. The code is divided as follows: - **can.h**: Shared header file for all parts of the CAN module. - **can_ring.c**: Implements a reusable variable size ring buffer of CAN_FRAME struct elements. - **can_tx.c**: Implements functions related to transmitting and assembling CAN frames. - **can_rx.c**: Implements functions related to receiving and disseminating CAN frames. - **can_isr.c**: Implements the interrupt service routines used by the CAN module. - **can_types.c**: Implements functions related to the various supported frame layouts. **CAN Frame Ring Buffer** **Concept** There is often need within the CAN module and those it interacts with for reusable ring buffer code for storing CAN frames. A ring buffer uses a contiguous block of memory, commonly known as an array, as its underlying storage area. 
However, unlike an array, the storage elements of a ring buffer are seen to be arranged in a ring, with no logical ends. This is similar to a queue data structure, but with a fixed size. Unlike an array, a ring buffer must keep track of two indices, an input index and an output index. The input index points to the location in the ring containing the newest data, and the output index points to the location of the oldest data.

**Implementation**

**Initialization**

The CAN_RING struct must be instantiated, and an array of CAN_FRAMEs of the desired length must be allocated to serve as the underlying storage field. This CAN_RING instance is then initialized with the following function:

```c
void can_ring_init(CAN_RING * ring, CAN_FRAME * frame_buf, int buf_len);
```

The first argument is the address of the ring to initialize, the second is the address of the buffer to use, and the third is the length of the buffer. This initializes the input and output indices, `in_idx` and `out_idx`, to a value of `buf_len - 1`. To be specific, `in_idx` is defined as the index of the most recently input element, and `out_idx` is defined as the index of the most recently output element. Therefore, `in_idx` points to a valid data element when the buffer is not empty, but `out_idx` never points to a valid data element.

```c
typedef struct can_ring{
    CAN_FRAME * buf;
    int buf_len;
    volatile int in_idx;
    volatile int out_idx;
} CAN_RING;
```

**Push**

Inserting an element into a CAN_RING is done through the `can_ring_push` function:

```c
int can_ring_push(CAN_RING * ring, CAN_FRAME * frame);
```

This function checks whether there is room to add the given frame and returns 1 if there is no free space remaining in the ring. Otherwise, `in_idx` is incremented and rotated if necessary, the frame is inserted into the ring at that location, and 0 is returned.

**Pop**

Removing an element from a CAN_RING is done through the `can_ring_pop` function:

```c
int can_ring_pop(CAN_RING * ring, CAN_FRAME * frame);
```

This function checks whether a frame is available in the ring and returns 1 if the ring is empty. Otherwise, `out_idx` is incremented and rotated, the next available frame is copied from the ring at that location to the frame location given as the second function argument, and 0 is returned.

```c
typedef struct can_frame{
    CAN_CHANNEL chan;
    int addr : 11;
    int dlc : 4;
    char rtr : 1;
    CAN_PAYLOAD payload;
} CAN_FRAME;
```

**Concurrency and Preemption**

Although the prior sections glossed over it, the CAN_RING is designed to be safe to use between different preemption levels. Specifically, the CAN_RING is designed such that it is always safe for the input end to be at a different preemption level than the output end. However, it is not safe for a single end to be accessed by multiple preemption levels. Worded another way, operations must be atomic relative to other operations of the same type. For example, this means that it is safe to connect a ring between main and interrupt levels, or between interrupt and fast interrupt levels, but it is not safe to push frames into a single ring from both main and interrupt levels.

**Transfer Layer**

**Concept**

The transfer layer is divided into two separate paths, the transmit path and the receive path. To transmit a frame, the user submits a frame to the transfer layer and it is added to the transmit buffer for the appropriate CAN bus. Asynchronous transmit processes for each bus empty these buffers onto the wire. The user does not directly interact with the transfer layer in order to receive a frame.
Each channel can be configured to optionally store received frames into a ring buffer or to automatically dispatch frames via the frame distribution layer. Most functions within the transfer layer are written generically so as to apply to any CAN channel. This works well because all of the CAN controllers on the ARM7 processor are identical with the exception of the base address of their registers.

**Implementation**

**Initialization and Channel Configuration**

Since the transfer layer's code handles the CAN controllers in a generic manner, it was necessary to introduce structures to store the configuration and state of each channel. The CAN_RX_CHAN_CFG and CAN_TX_CHAN_CFG structures serve this purpose for the receive and transmit directions, respectively. It was convenient to divide these into two separate structures because the transmit and receive code, which contain the arrays of instances of these structures, are in separate source files, can_tx.c and can_rx.c, respectively. For initializing the receive configuration structure, there is the function `can_rx_set_chan_cfg`:

```c
void can_rx_set_chan_cfg(CAN_CHANNEL chan, volatile unsigned long * base_addr,
                         CAN_RING * rx_ring, CAN_DISPATCH_MODE mode);
```

The arguments to this function are which channel to configure, the base address of the registers of the hardware CAN controller for that channel, a pointer to a ring for storing frames in manual dispatch mode, and a flag selecting between manual dispatch mode (CAN_DISPATCH_MANUAL) and automatic dispatch mode via the distribution layer (CAN_DISPATCH_AUTO). Note that in automatic dispatch mode the ring is unnecessary and can be omitted by replacing it with 0, the null pointer. The receive configuration structure also has another field called `descriptors`. This field is related to the distribution layer and will be described there. For initializing the transmit configuration structure, there is the function `can_tx_set_chan_cfg`:

```c
void can_tx_set_chan_cfg(CAN_CHANNEL chan, volatile unsigned long * base_addr,
                         CAN_RING * tx_ring);
```

This is essentially identical to its receive counterpart, except that the ring is mandatory whenever transmit functionality is desired. The integer `stalled` flag will be described later in the Transmit Path section of the Transfer Layer documentation.

### Interrupt Service Routines

There are a total of nine interrupt service routines (ISRs) in the transfer layer. Each of the four CAN controllers has one transmit ISR and one receive ISR, and there is one common error handling ISR. Notably, the receive and transmit ISRs do not follow the convention of having a single instance of generic code that applies to all CAN controllers. This departure was necessary due to the behavior of the ARM7's Vectored Interrupt Controller (VIC). Specifically, upon the firing of a vectored interrupt, the VIC looks up the programmed ISR address for that interrupt and calls that function, but gives no other direct indication of what the source of the interrupt was. Therefore, if a single ISR were shared among all controllers, that ISR would have to manually look up the source of the interrupt, which would be a slow process. However, by having a separate ISR for every interrupt, which ISR is called implies the source of the interrupt, and therefore no source lookup is required. In order to prevent problems and complications due to code duplication, only the bare minimum of required code is in the transmit and receive ISRs.
Instead, they pass off control to generic functions to do the actual work, indicating which channel should be used. The receive ISRs are named `can_rx1_isr`, `can_rx2_isr`, `can_rx3_isr`, and `can_rx4_isr`, with prototypes as follows: ``` __irq void can_rx1_isr(void); ``` The receive ISR is fired whenever a CAN controller receives a frame. Control is passed on to the function `can_rx_now` to continue generic processing. Similarly, the transmit ISRs are named `can_tx1_isr`, `can_tx2_isr`, `can_tx3_isr`, and `can_tx4_isr`, with prototypes as follows: ``` __irq void can_tx1_isr(void); ``` The transmit ISR is fired whenever a CAN controller finishes transmitting a frame. Control is passed on to the function `can_tx_send_next_frame` to continue generic processing. The CAN controller is capable of encountering a number of error states. Practically, the only error of concern is the transmit error counter limit. Whenever the bus encounters a transmit error, it increments the transmit error counter, and upon success it is decremented. When the transmit error counter reaches its limit of 255, the CAN controller is prohibited from transmitting frames until the error is explicitly cleared. To do this, there is the CAN error ISR which is shared by all CAN channels, `can_error_isr`: ``` __irq void can_error_isr(void); ``` The error ISR is fired whenever a CAN error occurs. The ISR then checks each channel to see if it is in a bus-off state. If it is, the bus is reset. While it is possible for transmit errors to occur in normal operation, they are very unlikely to accumulate sufficiently to reach the error limit. However, transmit errors are extremely common during the development process when microcontrollers are being programmed, inserted, and removed from the network, and so automatic error recovery is therefore essential to an efficient development cycle. **Receive Path** As mentioned earlier, the very first event in the receive path is the firing of the CAN channel’s interrupt service routine. This ISR does no work of its own and immediately passes control to the function `can_rx_now`, passing the CAN channel as a function argument. ``` void can_rx_now(CAN_CHANNEL chan); ``` This function then collects the components of the received frame’s data from the CAN controller’s registers and stores it into a `CAN_FRAME` structure instance. Once this frame has been assembled, the controller is told to release the data so that it can receive another frame. Then, if the transmit layer was configured in manual dispatch mode during the initialization step, the frame is pushed into the receive ring buffer; the frame is lost if the ring is full. Otherwise, if automatic dispatch mode is in use, the frame is passed to the first function of the distribution layer, can_rx_dispatch_frame. If desired, users can check the chan field of the frame structure in order to determine which channel a frame was received on. Note that this function, like most generic CAN functions, must do a small amount of work to access CAN registers. Specifically, the absolute address of the desired register must be computed based on the base address of the CAN controller in use. To ease this process, a few macros were defined. First, a list of the relative offsets of all registers in a CAN controller was defined as follows: ```c #define CAN_MOD (0x00) #define CAN_CMR (0x04) #define CAN_GSR (0x08) ... 
```

Then, a macro is defined which computes and correctly casts the address of a register from a base address and a relative offset, as follows:

```c
#define CAN_REG(base,offset) \
    (*((volatile unsigned long *) (((volatile unsigned char *)base) + \
    offset)))
```

After setting this up, registers can be read and written simply as follows:

```c
frame.addr = CAN_REG(base,CAN_RID);
CAN_REG(base,CAN_CMR) = 1<<2;
```

**Transmit Path**

The user initiates the process of transmitting a frame by calling the function can_transmit_frame.

```c
int can_transmit_frame(CAN_FRAME * frame);
```

This function takes the given frame, determines which channel it should be transmitted on based on its chan field, and pushes it into the transmit ring buffer for that channel. As mentioned earlier, an asynchronous process empties this ring onto the CAN bus. Associated with this process is the stalled flag stored in the transmit configuration structure. This flag indicates whether the process is currently running or has stalled because it ran out of data to transmit. After pushing the new frame onto the ring, this function checks the stall flag. If the other process is currently running and not stalled, this function exits, because the other process will eventually get to the newly added frame. However, if the other process has stalled, it must be manually restarted. To do this, the function that primarily implements the other process, can_tx_send_next_frame, is called.

```c
void can_tx_send_next_frame(CAN_CHANNEL chan);
```

This function pops data off the transmit ring. If no data is available then it sets the stall flag and does not transmit any more data, awaiting a restart by `can_transmit_frame`. Otherwise, the stall flag is cleared, the available frame is written to the transmit registers of the appropriate CAN controller, and then this function exits. This function is then called again by the transmit ISR when the CAN controller is ready to transmit another frame.

**Distribution Layer**

**Concept**

The Distribution Layer assembles data from remote locations on a processor into complete CAN frames, and distributes data from complete frames to such locations. Multiple data type combinations, known as layouts, are supported, and data quantities are accessed through getter and setter functions. Frame descriptor structures record a particular layout and which getter and setter functions are used to populate or disseminate the corresponding data. Data transmission with this system can be easily scheduled with the system's main task scheduler. Lists of frame descriptors are used for distributing received frames and handling Remote Transmit Request (RTR) frames.

**Implementation**

**Functions**

Rather than reading and writing the memory locations of quantities directly, quantities are accessed through getter and setter functions. The reason for this is twofold. First, there are some situations where it might not be safe to simply read or write a quantity due to a race condition or other problem. In these situations, a wrapper function can be made to handle these conditions and safely access the quantity in question. Second, many quantities are not transmitted over the wire in the same format as they are used internally on the microcontroller, and conversion between formats can be quite expensive. The use of a wrapper allows conversion between formats only when necessary. Specifically, getter and setter functions are accessed frequently via what are known as function pointers.
A getter function pointer is the address of a function that takes no arguments and returns data of the given type, and a setter function pointer is the address of a function that has a void return type and takes a single argument of the given type.

**Layouts and Frame Descriptors**

CAN frames support payload sizes of up to eight bytes. That space could be divided and used in many different ways. For example, it could be used for a single 64-bit double-word floating point number, two single-word integers, or a single-word integer and two short integers. These different, possibly mixed data type, payload configurations are called payload layouts, and a number of different layouts are supported, with the ability to easily add more. Clearly, in order to get data where it needs to go, we must associate getter and setter functions with these layouts. To do this we use frame descriptors in the CAN_FRAME_DESC structure.

```c
typedef struct can_frame_descriptor {
    int addr : 11;
    CAN_CHANNEL chan;
    char rtr : 1;
    CAN_LAYOUT frame_layout;
    CAN_VV_PTR ptr1;
    CAN_VV_PTR ptr2;
    CAN_VV_PTR ptr3;
    CAN_VV_PTR ptr4;
    CAN_VV_PTR ptr5;
    CAN_VV_PTR ptr6;
    CAN_VV_PTR ptr7;
    CAN_VV_PTR ptr8;
} CAN_FRAME_DESC;

typedef void (*CAN_VV_PTR)(void);
```

This structure is similar to the CAN_FRAME structure in that it too has addr, chan, and rtr fields with the same meaning. New, however, are the frame_layout and ptr1 through ptr8 fields. Currently there are five available layouts, as shown above in the CAN_LAYOUT structure, where the suffixes indicate layout contents: D indicates a double-word floating point value (8 bytes), F indicates a single-word floating point value (4 bytes), I indicates a single-word integer value (4 bytes), and S indicates a short-word integer value (2 bytes). The ptr# fields are used to store the addresses of the getter and setter functions for use with the given layout. There are eight available fields because the smallest supported type is one byte, leading to at most eight quantities. These fields are of type void-void function pointer because no single type can match all of the different getter and setter functions used, so void-void was chosen as the most generic possible function pointer type. The use of void-void types clearly makes compile-time type checking impossible if the user were to simply assign the addresses of their getter and setter functions directly to a frame descriptor. To solve this problem, the user never directly accesses the function pointer fields of the frame descriptor. Instead, for each layout a set of functions is created to populate frame descriptors for incoming and outgoing frames. This way, the prototypes of these population functions can be used to enforce compile-time type checking and keep the user safe. Example function prototypes for the layout CAN_LAYOUT_FI are below.

```c
void can_set_tx_descriptor_fi(CAN_FRAME_DESC* frame_desc, int addr,
    CAN_CHANNEL chan, CAN_TX_GETTER_FLOAT g_f1, CAN_TX_GETTER_INT g_i1);

void can_set_rx_descriptor_fi(CAN_FRAME_DESC* frame_desc, int addr,
    CAN_RX_SETTER_FLOAT s_f1, CAN_RX_SETTER_INT s_i1);
```

**Initialization**

First, ensure that the Transfer Layer has been initialized correctly. If the distribution layer is to be used for automatically distributing frames upon receipt, ensure the Transfer Layer is set to automatic dispatch mode. Next, if you wish to use the receive and distribution functionality, it is necessary to create a null-terminated list of the frame descriptors which you wish to receive.
Similarly, if you wish to use Remote Transmit Request (RTR) functionality, you must create another null terminated list of the frame descriptors which you wish to be available for RTR transmission. Ensure that all of these descriptors are properly initialized with descriptor population functions as mentioned in the prior section. Now, if you are using either of these, inform the CAN module of these lists by calling `can_rx_set_descriptors`. The null pointer, 0, can be used in place of the list address for a feature which is not desired. ```c void can_rx_set_descriptors(CAN_FRAME_DESC ** rx_descriptors, CAN_FRAME_DESC ** rtr_descriptors); ``` This function stores the addresses of these two given lists in `can_rx.c` for later use. Note that the receive descriptor list contains all of the information necessary to configure the CAN controller’s acceptance filter. The acceptance filter is a hardware filter which allows the user to select which CAN IDs make it through for user processing. This can allow massive processor cycle savings by avoiding processing frames for which you are not the intended recipient. However, acceptance filter configuration has been a low priority and has not been implemented yet. However, this is where it would get called from. Note that this only results in higher processor usage in the event of unintended messages, but not incorrect behavior in the normal case. Earlier, it was pointed out that there was another field in the receive channel configuration structure, called descriptors, which should now make sense. However, that would be misleading. The field in that structure is in fact vestigial and should have been deleted. The receive descriptor list was moved from being channel local to being global among all channels for two reasons. First, it was desirable to separate the configuration of the two layers and making the list global accomplished that goal. However, more importantly, adding RTR support was a late-added feature. However, as RTR frames are ultimately transmit oriented, not receive oriented, which channel they transmit out over is defined in a field of the frame descriptor itself, and so therefore it would not make sense for an RTR list to be associated with any particular channel. Therefore, it was seen as a cleaner solution to keep both lists as similar as possible, and that was accomplished by making both lists global. Now, although it is not needed to assemble them into a list, allocate all frame descriptors you wish to transmit with this layer, and populate them with the aforementioned functions as appropriate. **Assembly and Transmission** Transmitting a frame descriptor is done through either the `can_transmit` or `can_transmit_alt` functions. ```c int can_transmit(CAN_FRAME_DESC * fd); int can_transmit_alt(CAN_FRAME_DESC * fd, CAN_CHANNEL chan, char rtr); ``` can_transmit is the original function for transmitting frame descriptors, while can_transmit_alt was added in order to support RTR transmissions and override the chan and rtr fields of a frame descriptor. In practice, can_transmit simply copies the chan and rtr fields from its given frame descriptor and then calls can_transmit_alt with those values to do the real work. This function processes the given frame descriptor based on its layout field, interprets all pointers as getter functions because this is a transmit operation that is collecting data, calls each function, and stores each return value in the correct area of a frame payload. If the frame is RTR then no data is collected. 
The frame's chan and rtr fields are then filled in according to the function arguments, and the assembled frame is passed to the Transfer Layer's can_transmit_frame function for transmission over the bus. In general, tasks do not necessarily control when their data is sent out over the CAN bus. The reason for this is that CAN bandwidth is scarce and must be conserved, so it may not be desirable for tasks to transmit values on every execution cycle. In order to efficiently utilize bandwidth, it is necessary to coordinate the execution of multiple tasks and the transmission of multiple frames. To do this, CAN transmission scheduling is integrated into the processor's main task execution scheduler. As the task scheduler is based around executing void-void function pointers, CAN transmission must adhere to this scheme as well. Therefore, a frame that is scheduled by the task scheduler must have an associated wrapper function that takes no arguments and returns void. The wrapper can then be inserted into the task scheduler. The primary function of the wrapper's code body is to call the function can_transmit with the desired frame descriptor.

**Receipt and Distribution**

The frame distribution process begins when the Transfer Layer passes a newly received frame to the function can_rx_dispatch_frame.

```c
void can_rx_dispatch_frame(CAN_FRAME * frame);
```

First, the function searches the RTR or receive descriptor list for a matching CAN ID, depending on whether the frame is or is not an RTR frame, respectively. If no match is found then the process simply ends and the frame is discarded. If an RTR match is found, the matched frame descriptor is passed to can_transmit so that the RTR request is responded to. Otherwise, when a receive descriptor is matched, the data in the received frame is distributed according to the layout and setter functions in the matched frame descriptor.

**Limitations and Future Work**

- Function performance overhead
- Global receive descriptor list
- Mixed layers
- Evolution – split into two modules
- Single transmit buffer and high-priority transmissions
- Acceptance filter

**Usage Scenarios**

The layers of the CAN module are designed such that while they can be used in whole to fulfill their basic usage scenarios, they are also capable of being used in part and configured in alternate manners in order to serve advanced functions. Some possible usage scenarios which have been used in the past are described below.

**Bidirectional Transfer**

The basic normal use of the transfer layer is to provide bidirectional data transfer with a separate buffer for each direction of each channel in use.

**Shared Receive Buffers**

It is also possible to configure all channels to use the same receive ring buffer. This is useful in two situations. First, in some situations it is necessary or desirable to know the temporal order in which frames arrived without recording timestamps. By putting them into a single buffer, they are stored in order of arrival. Second, some applications, such as a CAN router, consume all CAN frames in a single location. Storing all frames in a single buffer provides an automatic means of consolidating frames so that they can all be accessed in the same location. This is also useful in the example of the CAN bus probe, which allows the user to monitor communications over CAN busses.

**Automatic Distribution**

The normal usage scenario for the distribution layer is to have data distribution occur immediately upon receipt.
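To make the automatic-distribution scenario concrete, the sketch below wires the pieces described above together for one hypothetical frame. Only the can_* calls and types come from the module's API as documented; the channel name CHAN_1, the register base CAN1_BASE_ADDR, the CAN ID 0x123, and the two quantities and their setters are invented purely for illustration.

```c
#include "can.h"

/* Hypothetical quantities fed by one received frame (float + int layout). */
static volatile float hip_angle;
static volatile int   hip_status;

static void set_hip_angle(float v)  { hip_angle  = v; }
static void set_hip_status(int v)   { hip_status = v; }

/* One receive descriptor, collected into a null-terminated list. */
static CAN_FRAME_DESC hip_state_desc;
static CAN_FRAME_DESC *rx_descriptors[] = { &hip_state_desc, 0 };

void can_auto_distribution_setup(void)
{
    /* Transfer layer: channel 1 dispatches received frames automatically,
     * so no receive ring is needed (null pointer). CHAN_1 and
     * CAN1_BASE_ADDR are illustrative names, not part of the module. */
    can_rx_set_chan_cfg(CHAN_1, CAN1_BASE_ADDR, 0, CAN_DISPATCH_AUTO);

    /* Distribution layer: frames with CAN ID 0x123 use the F+I layout and
     * are written to the two quantities through their setter functions. */
    can_set_rx_descriptor_fi(&hip_state_desc, 0x123, set_hip_angle, set_hip_status);

    /* Register the receive list; no RTR descriptors in this example. */
    can_rx_set_descriptors(rx_descriptors, 0);
}
```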
**Delayed Distribution** However, sometimes the data distribution mechanism is desired, but one wants to be able to control the timing of distribution. This can be done by using the Transfer Layer in manual dispatch mode such that received frames go into a ring. Then, simply pop frames off that ring and pass them to `can_rx_dispatch_now` when you want distribution to occur. This behavior was useful, for example, when an ARM7 was used as the main brain and we wanted to avoid race conditions without mirroring all of our data to allow for asynchronous modifications in response to events on the CAN bus.
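A minimal sketch of this delayed-distribution setup is shown below, under the same assumptions as before (CHAN_1, CAN1_BASE_ADDR, and the buffer size are illustrative names only). The dispatch entry point is referred to above as can_rx_dispatch_now and, in the Receipt and Distribution section, as can_rx_dispatch_frame; the sketch uses the latter, whose prototype is given in that section.

```c
#include "can.h"

/* Hypothetical storage: a 32-entry ring used by channel 1 in manual mode. */
#define RX_BUF_LEN 32
static CAN_FRAME rx_storage[RX_BUF_LEN];
static CAN_RING  rx_ring;

void delayed_rx_init(void)
{
    /* Received frames are parked in the ring instead of being dispatched
     * immediately from the receive path. */
    can_ring_init(&rx_ring, rx_storage, RX_BUF_LEN);
    can_rx_set_chan_cfg(CHAN_1, CAN1_BASE_ADDR, &rx_ring, CAN_DISPATCH_MANUAL);
}

/* Call this from the main loop or a scheduled task whenever it is safe to
 * apply incoming data; distribution happens here, at a time of our choosing. */
void delayed_rx_poll(void)
{
    CAN_FRAME frame;
    while (can_ring_pop(&rx_ring, &frame) == 0)   /* 0 means a frame was popped */
        can_rx_dispatch_frame(&frame);
}
```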
{"Source-Url": "http://ruina.tam.cornell.edu/research/topics/locomotion_and_robotics/ranger/ranger_paper/Reports/Ranger_Robot/software/Craig_Thomas_CAN_Module_Documentation_2009.pdf", "len_cl100k_base": 5924, "olmocr-version": "0.1.53", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 30109, "total-output-tokens": 6582, "length": "2e12", "weborganizer": {"__label__adult": 0.0005402565002441406, "__label__art_design": 0.00047206878662109375, "__label__crime_law": 0.0004730224609375, "__label__education_jobs": 0.0003330707550048828, "__label__entertainment": 0.00010085105895996094, "__label__fashion_beauty": 0.0002281665802001953, "__label__finance_business": 0.00025653839111328125, "__label__food_dining": 0.0005235671997070312, "__label__games": 0.0010223388671875, "__label__hardware": 0.029296875, "__label__health": 0.0005202293395996094, "__label__history": 0.0003192424774169922, "__label__home_hobbies": 0.00023174285888671875, "__label__industrial": 0.00141143798828125, "__label__literature": 0.00016939640045166016, "__label__politics": 0.0002734661102294922, "__label__religion": 0.0006384849548339844, "__label__science_tech": 0.0682373046875, "__label__social_life": 5.984306335449219e-05, "__label__software": 0.01100921630859375, "__label__software_dev": 0.8818359375, "__label__sports_fitness": 0.0005450248718261719, "__label__transportation": 0.0013322830200195312, "__label__travel": 0.0002796649932861328}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28841, 0.00669]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28841, 0.48602]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28841, 0.90591]], "google_gemma-3-12b-it_contains_pii": [[0, 1884, false], [1884, 3968, null], [3968, 5977, null], [5977, 8097, null], [8097, 10490, null], [10490, 13024, null], [13024, 15622, null], [15622, 18364, null], [18364, 20780, null], [20780, 24047, null], [24047, 26628, null], [26628, 28642, null], [28642, 28841, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1884, true], [1884, 3968, null], [3968, 5977, null], [5977, 8097, null], [8097, 10490, null], [10490, 13024, null], [13024, 15622, null], [15622, 18364, null], [18364, 20780, null], [20780, 24047, null], [24047, 26628, null], [26628, 28642, null], [28642, 28841, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 28841, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28841, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28841, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28841, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28841, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28841, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28841, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28841, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28841, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28841, null]], "pdf_page_numbers": [[0, 1884, 1], [1884, 3968, 2], [3968, 5977, 3], [5977, 8097, 4], [8097, 10490, 5], [10490, 13024, 6], [13024, 15622, 7], [15622, 18364, 8], [18364, 20780, 9], [20780, 24047, 10], [24047, 26628, 11], [26628, 28642, 
12], [28642, 28841, 13]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28841, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
44c42e523b1077d0c1542562ee28f7d02848acc8
This is the accepted version of a paper presented at 5th USENIX Workshop on Hot Topics in Security (HotSec 2010). Citation for the original published paper: Moving from logical sharing of guest OS to physical sharing of deduplication on virtual machine. In: Proc. 5th USENIX Workshop on Hot Topics in Security (HotSec 2010) N.B. When citing this work, cite the original published paper. Permanent link to this version: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-199140 Moving from Logical Sharing of Guest OS to Physical Sharing of Deduplication on Virtual Machine Kuniyasu Suzaki† Toshiki Yagi† Kengo Iijima† Nguyen Anh Quynh† Cyrille Artho† Yoshihito Watanebe‡ † National Institute of Advanced Industrial Science and Technology ‡ Alpha Systems Inc. Abstract Current OSes include many logical sharing techniques (shared library, symbolic link, etc.) on memory and storage. Unfortunately they cause security and management problems which come from the dynamic management of logical sharing; e.g., search path replacement attack, GOT (Global Offset Table) overwrite attack, Dependency Hell, etc. This paper proposes that self-contained binaries eliminate the problems caused by logical sharing. The memory and storage overheads caused by self-contained binaries are mitigated by physical sharing (memory and disk deduplication). The effect of deduplication was investigated on the KVM virtual machine with KSM (Kernel Samepage Merging) and LBCAS (Loopback Content Addressable Storage). 1. Introduction Current OSes include many logical sharing techniques that reduce consumption of memory and storage. For example, dynamic-link shared library is a technique to share common functions and reduce memory usage. Symbolic link is a technique to share files and reduce storage usage. These techniques are useful, but they require dynamic management and cause some problems. A dynamic-link shared library of ELF format has a security issue called search path replacement attack [7] and GOT (Global Offset Table) overwrite attack [7,14]. A symbolic link has version mismatch problem called Dependency Hell. Parts of the problems can be solved by a static-link and substantial copy. Unfortunately, this approach requires source code, and many applications deeply depend on logical sharing. Furthermore, this solution increases memory and storage usage. On the other hand, memory and storage virtualization is now advanced, and physical sharing, called deduplication, has become popular. Deduplication is a technique to share same-content chunks (chunk is a unit of continuous data in a block image) at low level, reducing the total usage. Memory deduplication [1,3,9,12,19] is mainly used to reduce same-content memory pages among virtual machine instances. Storage deduplication [5,13,15,18,20] is mainly used by CAS (Content addressable Storage), which reduces same-content chunks among software versions. The technologies have become popular in cloud computing, because they reduce the consumption of physical resources. The user does not need to care about redundancy of memory and storage. This paper proposes the possibility of the replacement of logical sharing (dynamic-link shared library and symbolic link) by “self-contained” binaries which include dynamic-link shared libraries. The memory and storage overheads are mitigated by physical sharing (memory and storage deduplication). It shows the feasibility of deduplication on a single OS image to improve security. This paper is organized as follows. 
Section 2 reviews deduplication for memory and storage. Section 3 introduces issues of logical sharing. Section 4 describes the method to replace logical sharing with self-contained binaries. Section 5 evaluates the current implementation. Section 6 discusses future work, and Section 7 summarizes our conclusions. 2. Deduplication 2.1 Memory Deduplication Memory deduplication is mainly used on a virtual machine monitor. The memory images of virtual machine instances include many same-content pages, especially when same guest OS runs on several virtual machines. The same-content pages can be merged on physical memory. There are many recent implementations of memory deduplication. Early memory deduplication on a virtual machine monitor was implemented on Disco [3], which is called Transparent Page Sharing. VMWare ESX revised it as Content-Based Page Sharing [19]. Xen has two major implementations called Differential Engine [9] and Satori [12]. KVM uses KSM (Kernel Samepage Merging) [1], which is a general memory deduplication to merge memory image on a single process or multiple processes. The function of KSM is included from Linux kernel 2.6.32. 2.2 Storage Deduplication Storage deduplication is mainly used by CAS (Content addressable Storage), which has become a popular method to manage disk images for many OSes. In CAS systems, data is addressed not by its physical location but by a name that is derived from the content of that data (a secure hash is used as a unique name usually). A CAS system can reduce its total volume by deduplication, which aggregates same-content chunks with a unique name. CAS systems are divided into two categories, fixed- or variable-length chunk. Venti [13], CASPER [18] and LBCAS [15] use fixed-length chunk. DeepStore [20] and NEC-Hydra[5] use variable-length chunk. From the view of management overhead, fixed alignment is easy to treat. The effects of deduplication on several operating systems were evaluated in [10, 11]. Liguori [11] reported that Linux distributions (Fedora, Ubuntu, and OpenSUSE) had many same-content chunks and 10% of the image was deduplicated among the Linux distributions. Jin [10] reported that the effect of deduplication on a single OS image was not large except for zero-cleared chunks. 3. Drawback of Logical Sharing 3.1 Problems of Memory Sharing Dynamic-link shared library is a popular technique to reduce memory usage, but it has security and maintenance problems. Dynamic-link has a problem called “search path replacement attack”. Dynamic-link searches a shared library at run time using a search path. Search path is defined by an environment variable, for example, “LD_LIBRARY_PATH” environment variable on Linux. It is convenient because shared library is replaced for each process. However, if a search path is replaced for malicious library, a malware is loaded easily, because caller program has no methods to certify libraries. ELF format for dynamic-link has a problem called “GOT (Global Offset Table) overwrite attack” [7,14]. The GOT redirects position-independent address calculations to an absolute location and is located in the .got section of an ELF executable or shared object. It stores the final (absolute) location of a function call (symbol), used in dynamically linked code. GOT is mapped to the data segment and easily overwritten by malware. A dynamic-link shared library has a management problem called Dependency Hell (DLL Hell in Windows). 
A dynamic-link shared library also has a management problem, called Dependency Hell (DLL Hell on Windows). A partial change to a library makes it incompatible with programs that were built against an earlier version. Windows has been particularly vulnerable to this because of its emphasis on dynamic linking of C++ libraries and OLE (Object Linking and Embedding) objects. The same problem has occurred on Linux and MacOS. A dynamic-link shared library also incurs performance problems. The dynamic binding of the indirect table takes considerable time. It was reported that the boot time of KDE was dominated by its dynamic binding and that half of the KDE boot time was spent on it. To address the problem, some techniques have been proposed (e.g., prelink on Linux, prebinding on MacOS), but they are not widely used.

3.2 Problems of Storage Sharing

Symbolic links are a popular technique to reduce storage usage, but they also have problems. Symbolic links can easily cause Dependency Hell, because most libraries are symbolic-linked in order to manage minor updates. Although a package manager maintains the versions, symbolic links are easily replaced by hand. The adverse impact of this practice stays hidden, because the caller program has no means to verify a symbolic link.

4. From Logical Sharing to Self-Contained Binary

To mitigate the problems of logical sharing, our first thought was that replacing dynamic links with static links would solve them. Unfortunately, current applications deeply depend on dynamic-link shared libraries and are not easy to convert to static linking. This is due to the flexibility of dynamic-link shared libraries and to the desire to avoid license contamination problems. For example, the libraries of GNOME are assumed to be dynamic-link shared libraries. Gentoo, which builds all binaries from source code, ignores the configuration for the static-link compile option. Furthermore, this approach requires source code and does not apply to commercial applications. Instead of replacing dynamic linking with static-link shared libraries, we customized ELF executables with a "pseudo-static" converter. A pseudo-static converter embeds the dynamic-link shared libraries into an ELF executable. It aims to let a binary be carried to another machine without the need to drag all its libraries along. Several such tools have been developed for Linux, e.g., statifier [17], ermine [6], and autopackage [2]. In this paper we use statifier, which is open source software. Statifier takes a "memory snapshot" of a process, created by the loader when the loader has ALREADY finished relocation of the dynamic-link shared libraries (_dl_start() of ld-linux.so) and BEFORE the loader invokes any INIT functions. The relocation information and the dynamic-link shared libraries are included in the executable ELF file as data. Statifier includes all shared libraries, including linux-gate.so, the special library for Linux system calls, and ld-linux.so, the ELF interpreter and loader. The embedded relocation information and shared libraries are loaded by the starter of statifier, which is also embedded in the ELF file. The Linux kernel recognizes a `statifier`ed ELF binary as statically linked, because statifier includes the ELF interpreter (ld-linux.so) and there is no INTERP segment to invoke it. The ldd command shows no dynamic-link shared libraries for a `statifier`ed file. The resulting self-contained ELF binary prevents the search path replacement attack and Dependency Hell, because the shared libraries are included.
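The kernel's static-versus-dynamic decision mentioned above hinges on the presence of a PT_INTERP program header. The following is a minimal sketch (ours, not from the paper) of how that can be checked for a 64-bit ELF file; a normally linked binary reports an interpreter such as /lib64/ld-linux-x86-64.so.2, whereas a `statifier`ed or truly static binary reports none.

```cpp
// Build with: g++ -std=c++17 has_interp.cpp -o has_interp
#include <elf.h>
#include <cstdio>
#include <cstring>
#include <fstream>
#include <vector>

int main(int argc, char** argv) {
    if (argc != 2) { std::fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 2; }

    std::ifstream in(argv[1], std::ios::binary);
    std::vector<char> buf((std::istreambuf_iterator<char>(in)),
                          std::istreambuf_iterator<char>());
    if (buf.size() < sizeof(Elf64_Ehdr) ||
        std::memcmp(buf.data(), ELFMAG, SELFMAG) != 0 ||
        buf[EI_CLASS] != ELFCLASS64) {
        std::fprintf(stderr, "not a 64-bit ELF file\n");
        return 2;
    }

    const Elf64_Ehdr* eh = reinterpret_cast<const Elf64_Ehdr*>(buf.data());
    for (unsigned i = 0; i < eh->e_phnum; ++i) {
        const Elf64_Phdr* ph = reinterpret_cast<const Elf64_Phdr*>(
            buf.data() + eh->e_phoff + i * eh->e_phentsize);
        if (ph->p_type == PT_INTERP) {
            // The segment contains the NUL-terminated interpreter path.
            std::printf("INTERP segment: %s\n", buf.data() + ph->p_offset);
            return 0;
        }
    }
    std::printf("no INTERP segment (treated as a static binary)\n");
    return 1;
}
```

The `readelf -l` command reports the same program headers; the sketch merely makes explicit the property the paper relies on: without PT_INTERP the kernel never maps ld-linux.so, and the starter embedded by statifier takes over instead.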
Although the GOT still exists in a `statifier`ed ELF file, statifier mitigates GOT overwrite attacks, because the addresses are fixed in advance and falsification can be detected with the relocation information kept in the ELF file. The memory and storage overheads caused by statifier are mitigated by memory and storage deduplication, respectively.

## 5. Performance evaluation

This section describes the performance of self-contained binaries (statifier Linux) under deduplication, compared with normal Linux. We estimated the effect on two Linux distributions (Debian and Gentoo) and confirmed the same results; this section shows the results for Gentoo. Gentoo (kernel 2.6.31) was installed on a 32GB virtual disk (31GB ext3, 1GB swap) as a guest OS on the KVM virtual machine. The ELF binaries under /bin (82 files), /sbin (74), /usr/bin (912), and /usr/sbin (94) were customized by statifier. The two virtual disks (original and statifier) were translated to LBCAS (LoopBack Content Addressable Storage) [15], which offers fixed-size chunk deduplication. KSM of Linux was used to investigate the effect of memory deduplication. KVM ran on Ubuntu 9.10 (kernel: vanilla 2.6.32.1) with 768MB of memory for the virtual machine. KVM was allowed to use snapshot mode because LBCAS was configured as a read-only virtual device.

### 5.1 Effect of Dynamic-link Shared Library

First, we estimated the logical sharing effect of dynamic-link shared libraries on a normal Gentoo installation. Memory usage was measured with exmap [8]. At the end of the boot stage, 42 processes were running; all of them used linux-gate.so, ld-linux.so, and libc.so. Exmap shows the usage of virtual memory and of real memory. The total virtual memory used by all processes was 127,928KB. The memory used for unique data (that is, excluding shared libraries) was 41,744KB. The real memory used was 54,760KB. From these results we confirmed that shared libraries used 13,016 (54,760–41,744) KB of real memory. The shared libraries were expanded in virtual memory and used 86,184 (127,928–41,744) KB. The sharing effect of dynamic-link shared libraries was thus 6.62 (86,184/13,016), which means that the code of shared libraries was shared 6.62 times on average. We confirmed that dynamic-link shared libraries reduce memory usage; deduplication has to reproduce this effect.

### 5.2 Expansion by Statifier

In this section we show the impact of statifier on Gentoo. Statifier embeds shared libraries into each binary and increases the storage volume. Fortunately, the increase exists only from the guest OS's point of view, and it is mitigated by deduplication on the real storage. In our experiments statifier transformed 1,162 ELF binary files in `/bin`, `/sbin`, `/usr/bin`, and `/usr/sbin` into self-contained binary files. Table 1 shows the increase caused by statifier on the ELF files. The total volume of the original ELF binaries was 87.87MB. It was increased to 3,572.9MB (40.66 times) by statifier, because the ELF binaries now include all the necessary dynamic-link shared libraries. The average size of a binary file changed from 75.6KB to 3,074KB (40.66 times). For example, `/usr/bin/gnome-open`, which needed 6 libraries (linux-gate.so, ld-linux.so, libc.so, libm.so, libgcc_s.so, and libstdc++), grew from 3,426,340B to 6,094,848B (1.78 times). Normal Gentoo used 3,754MB of storage; statifier increased this to 7,075MB, 1.88 times bigger than the original.

### Table 1. Increase caused by Statifier on ELF files.
<table> <thead> <tr> <th>File Path</th> <th>Original</th> <th>Statifier</th> <th>Increase</th> </tr> </thead> <tbody> <tr> <td>Total</td> <td>87,865,480</td> <td>3,572,936,704</td> <td>40.66</td> </tr> <tr> <td>Average</td> <td>75,615</td> <td>3,074,816</td> <td>40.66</td> </tr> <tr> <td>Max</td> <td>5,400</td> <td>8,732,672</td> <td>1617.16</td> </tr> <tr> <td>Min</td> <td>3,426,340</td> <td>6,094,848</td> <td>1.78</td> </tr> </tbody> </table>

### 5.3 Statifier versus static linking

A subset of the 1,162 ELF files was re-compiled with static linking (57 in /bin, 22 in /sbin, 76 in /usr/bin, 9 in /usr/sbin). We compared these 164 static-link ELF files with the `statifier`ed files in Table 2. It shows that, in total, the `statifier`ed ELF files are 2.63 times bigger than the static-link versions. The biggest difference was bzip2recover, which grew 5.99 times with 3 shared libraries. The smallest difference was busybox, which grew 1.56 times with 7 shared libraries. In all cases, we found that statifier produced larger files than static linking. However, this subset is too small to solve the security and management problems caused by library sharing. The comparison was possible on Gentoo because all ELF files are built from source code on the client machine; it would require more effort with distributions that are managed with binary packages.

Table 2. Size comparison for 164 ELF files created with statifier and with static linking. () Indicates the difference with dynamic linking. [ ] Indicates the difference between static linking and statifier.

<table> <thead> <tr> <th></th> <th>Original</th> <th>Static link</th> <th>Statifier</th> </tr> </thead> <tbody> <tr> <td>Total</td> <td>8,065,092</td> <td>99,734,044</td> <td>262,627,328</td> </tr> <tr> <td>Average</td> <td>49,177</td> <td>608,134</td> <td>1,601,386</td> </tr> <tr> <td>Max</td> <td>9,512</td> <td>527,252</td> <td>3,160,672</td> </tr> <tr> <td>Min</td> <td>896,076</td> <td>1,682,984</td> <td>2,629,632</td> </tr> </tbody> </table>

5.4 Result of Memory Deduplication

We investigated the memory usage of normal and statifier Gentoo with and without KSM. Figure 1 shows the consumed 4KB memory pages. Statifier Gentoo required 2.64 (86,506/32,706) times more memory than normal Gentoo. This was caused by redundant loading of the same shared libraries. Fortunately, the redundant loading was deduplicated by KSM. The required physical memory was reduced to 34.4% (29,732 pages from 86,506), which is almost the same as normal Gentoo (30,410 pages with KSM). The small difference might be caused by the memory management of shared libraries, because a self-contained ELF binary releases its memory when it terminates. This result means that memory deduplication brings the same benefit as dynamic-link shared libraries. Interestingly, the memory of normal Gentoo was also deduplicated by KSM and reduced to 93.0% (30,410 pages from 32,706), which means that even normal Gentoo contains redundant memory. Figure 1 also shows the number of unique pages and deduplicated pages under KSM. Normal Gentoo had many unique pages (29,928), because dynamic-link shared libraries are counted as unique pages. On the other hand, statifier Gentoo had fewer unique pages (25,219) than normal, because the embedded shared libraries were treated as deduplicated pages. Figure 2 shows the trace of memory deduplication at boot time on normal and statifier Gentoo, using either the loopback device or LBCAS.
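KSM only merges pages that have been explicitly marked as candidates for merging; the KVM host side marks the guest's RAM itself, which is why the guest needed no changes in the setup above. For reference, a minimal sketch (ours, not part of the paper's setup) of how an ordinary process opts a memory region into KSM looks as follows:

```cpp
// Linux-only sketch; build with: g++ -std=c++17 ksm_optin.cpp -o ksm_optin
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>
#include <cstdio>
#include <cstring>

int main() {
    const std::size_t len = 16 * 1024 * 1024;           // 4096 identical 4KB pages
    void* p = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { std::perror("mmap"); return 1; }

    std::memset(p, 0x42, len);                           // fill with identical content

    // Hand the region to KSM; merging happens asynchronously while
    // /sys/kernel/mm/ksm/run is 1, and progress shows up in pages_sharing.
    if (madvise(p, len, MADV_MERGEABLE) != 0)
        std::perror("madvise(MADV_MERGEABLE)");

    sleep(30);                                           // give ksmd time to scan
    munmap(p, len);
    return 0;
}
```

With such a region in place, reading the counters under /sys/kernel/mm/ksm/ shows the same kind of shared-page figures that Figures 1 and 2 report for the whole guest.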
The results in Figure 2 show that physical memory consumption is almost the same on normal and statifier Gentoo, but the boot time is delayed by the overhead of deduplication. The overhead at boot time is discussed in Section 5.6.

5.5 Result of Storage Deduplication

We investigated the effect of deduplication on Gentoo with LBCAS (the fixed chunk size was varied among 16KB, 64KB, and 256KB). Table 3 shows the results. The left side shows the total storage size ("static"), and the right side shows the volume of chunks required at boot time ("dynamic"). The static image of statifier Gentoo was 1.88 times bigger than the normal one on the loopback device. However, the ratios were reduced by LBCAS, because identical chunks were deduplicated. Smaller LBCAS chunk sizes showed a lower ratio of increase (1.04 at 16KB, 1.12 at 64KB, 1.18 at 256KB), because a smaller chunk size allows more chunks to be deduplicated than a larger one. The dynamic image of statifier Gentoo was 2.25 times bigger than normal on the loopback device. The ratio was higher than for the static image, because the executable ELF binaries were expanded by statifier. In the 16KB case, auto login timed out and booting stopped at the GNOME graphical login manager (GDM); this was caused by the overhead of the 16KB LBCAS. The ratios for 64KB and 256KB were also reduced by LBCAS, but the effect differed from the static image; it depends on the deduplication ratio and on the number of accesses to a page. In any case, the results show that the storage expansion caused by statifier can be reduced by deduplication.

5.6 Boot Time

We investigated the boot time with and without KSM and LBCAS. The boot time was logged until the end of auto login. Table 4 shows the results, which indicate the overhead of KSM and LBCAS. In every case the overhead with KSM was larger than without KSM; it was caused by memory deduplication. The statifier cases in particular showed a larger overhead than normal, because more memory deduplication took place, as confirmed by the volumes in Figure 2. The overhead of LBCAS was also large on statifier Gentoo. It was caused by more I/O requests than normal, because statifier expanded each binary and increased the volume of I/O. Table 3 shows that statifier required 2.25 times more volume than normal at boot time. However, the overhead was not proportional to the LBCAS chunk size, because many chunks were deduplicated and statifier required only 1.29 times more than normal. In the case of using the loopback device without KSM, statifier was faster than normal. This was caused by the elimination of relocation time, symbol resolution time, and binary loading time by statifier, which has the same effect as prelink. In all cases KSM and LBCAS caused a time overhead; this is a trade-off between CPU and storage. Considering the improved security, we regard the overhead as acceptable.

Table 3. Number of chunks on LBCAS (fixed chunk size varied among 16KB, 64KB, and 256KB). The upper row shows the volumes required by the guest OS. The left columns show the numbers for the static storage image; the right columns show the numbers of chunks required at boot time. Parentheses indicate the ratio of statifier compared to normal.
<table> <thead> <tr> <th>Static normal</th> <th>Dynamic (boot)</th> </tr> </thead> <tbody> <tr> <td>GuestOS</td> <td>Normal</td> </tr> <tr> <td>Volume 16KB</td> <td>3,754MB</td> </tr> <tr> <td></td> <td>268,454</td> </tr> <tr> <td>Volume 64KB</td> <td>74,679</td> </tr> <tr> <td></td> <td>22,806</td> </tr> <tr> <td>Volume 256KB</td> <td>22,806</td> </tr> </tbody> </table>

6. Discussion

SLINKY [4] uses the same approach, but it requires a special kernel, because SLINKY does not use a virtual machine and offers no memory or storage deduplication. Our approach is practical in virtual machine environments, especially in cloud computing, and the method can easily be applied to other operating systems. Statifier is a substitute for static-link shared libraries. We could use sta.li [16], which links everything statically and has no /lib directory. However, it has many restrictions, because current Linux applications deeply depend on dynamic-link shared libraries. The pseudo-static replacement is practical and applicable to existing Linux distributions. On the other hand, there are some projects that increase the number of self-contained binaries for application migration, e.g., Google NaCl (Native Client) and VMware ThinApp (formerly Thinstall) on Windows. They require more memory, but deduplication will mitigate this.

7. Conclusions

This paper describes the possibility of replacing logical sharing with self-contained binaries. The memory and storage overhead is mitigated by physical sharing (memory and disk deduplication). A self-contained binary mitigates the problems that come from the dynamic management of logical sharing, such as the search path replacement attack, the GOT overwrite attack, and Dependency Hell. Experiments demonstrated the effect of self-contained binaries, memory deduplication (KSM), and storage deduplication (LBCAS). Self-contained binaries made the files 40.66 times bigger than normal, but deduplication reduced the overhead to less than 1.4 times for memory, storage, and time. Deduplication has so far been used across multiple virtual machine instances or across multiple versions of storage images; this paper shows that deduplication is also useful within a single OS image, which indicates a new use for the technique.

Figure 2. Trace of memory usage under Linux KSM deduplication at boot time. The upper and lower rows show the results on a normal loopback device and on LBCAS; the left and right columns show the results for normal Gentoo and for statifier Gentoo.

References

[8] exmap. http://www.berthels.co.uk/exmap/
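To make the fixed-size chunk deduplication of Sections 2.2 and 5.5 concrete, the following is a small self-contained sketch (ours, and far simpler than LBCAS; std::hash stands in for the secure hash a real CAS would use) that splits a disk image into fixed-size chunks, names each chunk by a hash of its content, and reports how many unique chunks remain:

```cpp
// Build with: g++ -std=c++17 chunk_dedup.cpp -o chunk_dedup
#include <cstddef>
#include <cstdio>
#include <fstream>
#include <string>
#include <unordered_set>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: %s <image> [chunk-bytes]\n", argv[0]); return 2; }
    const std::size_t chunk_size = (argc > 2) ? std::stoul(argv[2]) : 64 * 1024;

    std::ifstream in(argv[1], std::ios::binary);
    std::unordered_set<std::size_t> unique_chunks;   // hash value acts as the chunk "name"
    std::size_t total_chunks = 0;

    std::vector<char> buf(chunk_size);
    while (in.read(buf.data(), buf.size()) || in.gcount() > 0) {
        std::string chunk(buf.data(), static_cast<std::size_t>(in.gcount()));
        unique_chunks.insert(std::hash<std::string>{}(chunk));
        ++total_chunks;
    }

    std::printf("chunks: %zu total, %zu unique (dedup ratio %.2f)\n",
                total_chunks, unique_chunks.size(),
                total_chunks ? double(total_chunks) / unique_chunks.size() : 0.0);
    return 0;
}
```

Running it on the original image and on a statifier-style image would show, much as Table 3 does, that the blown-up image collapses back toward the original number of unique chunks once the embedded libraries are deduplicated.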
{"Source-Url": "http://www.diva-portal.org/smash/get/diva2:1060429/FULLTEXT01.pdf", "len_cl100k_base": 5363, "olmocr-version": "0.1.51", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 20616, "total-output-tokens": 6740, "length": "2e12", "weborganizer": {"__label__adult": 0.00038552284240722656, "__label__art_design": 0.00044465065002441406, "__label__crime_law": 0.0007290840148925781, "__label__education_jobs": 0.00066375732421875, "__label__entertainment": 0.0001043081283569336, "__label__fashion_beauty": 0.00015103816986083984, "__label__finance_business": 0.0005779266357421875, "__label__food_dining": 0.00029158592224121094, "__label__games": 0.000614166259765625, "__label__hardware": 0.00307464599609375, "__label__health": 0.0004324913024902344, "__label__history": 0.0003249645233154297, "__label__home_hobbies": 0.00014209747314453125, "__label__industrial": 0.0006875991821289062, "__label__literature": 0.0002837181091308594, "__label__politics": 0.0003006458282470703, "__label__religion": 0.0003788471221923828, "__label__science_tech": 0.212890625, "__label__social_life": 0.00014579296112060547, "__label__software": 0.056396484375, "__label__software_dev": 0.72021484375, "__label__sports_fitness": 0.00019371509552001953, "__label__transportation": 0.0005002021789550781, "__label__travel": 0.00017952919006347656}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25479, 0.05774]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25479, 0.30719]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25479, 0.9083]], "google_gemma-3-12b-it_contains_pii": [[0, 549, false], [549, 4757, null], [4757, 9668, null], [9668, 14383, null], [14383, 18380, null], [18380, 22658, null], [22658, 25479, null]], "google_gemma-3-12b-it_is_public_document": [[0, 549, true], [549, 4757, null], [4757, 9668, null], [9668, 14383, null], [14383, 18380, null], [18380, 22658, null], [22658, 25479, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 25479, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25479, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25479, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25479, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 25479, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25479, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25479, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25479, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25479, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25479, null]], "pdf_page_numbers": [[0, 549, 1], [549, 4757, 2], [4757, 9668, 3], [9668, 14383, 4], [14383, 18380, 5], [18380, 22658, 6], [22658, 25479, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25479, 0.15038]]}
olmocr_science_pdfs
2024-12-04
2024-12-04
793f0d10cc944730b5212c1a7f5298416b762758
Rcpp syntactic sugar

Dirk Eddelbuettel  Romain François

Rcpp version 0.12.12 as of July 13, 2017

Abstract

This note describes Rcpp sugar, which was introduced in version 0.8.3 of Rcpp (Eddelbuettel, François, Allaire, Ushey, Kou, Russel, Chambers, and Bates, 2017; Eddelbuettel and François, 2011). Rcpp sugar brings a higher level of abstraction to C++ code written using the Rcpp API. Rcpp sugar is based on expression templates (Abrahams and Gurtovoy, 2004; Vandevoorde and Josuttis, 2003) and provides some 'syntactic sugar' facilities directly in Rcpp. This is similar to how RcppArmadillo (Eddelbuettel, François, and Bates, 2016) offers linear algebra C++ classes based on Armadillo (Sanderson, 2010).

1 Motivation

Rcpp facilitates development of internal compiled code in an R package by abstracting low-level details of the R API (R Core Team, 2015) into a consistent set of C++ classes. Code written using Rcpp classes is easier to read, write and maintain, without losing performance. Consider the following code example which provides a function foo as a C++ extension to R by using the Rcpp API:

```cpp
RcppExport SEXP foo(SEXP x, SEXP y) {
    Rcpp::NumericVector xx(x), yy(y);
    int n = xx.size();
    Rcpp::NumericVector res(n);
    double x_ = 0.0, y_ = 0.0;
    for (int i = 0; i < n; i++) {
        x_ = xx[i];
        y_ = yy[i];
        if (x_ < y_) {
            res[i] = x_ * x_;
        } else {
            res[i] = -(y_ * y_);
        }
    }
    return res;
}
```

The goal of the function foo is simple: given two numeric vectors, we create a third one. This is typical low-level C++ code that could be written much more concisely in R thanks to vectorisation, as shown in the next example.

```r
> foo <- function(x, y){
+     ifelse(x < y, x*x, -(y*y))
+ }
```

Put succinctly, the motivation of Rcpp sugar is to bring a subset of the high-level R syntax into C++. Hence, with Rcpp sugar, the C++ version of foo becomes:

```cpp
RcppExport SEXP foo(SEXP x, SEXP y) {
    Rcpp::NumericVector xx(x), yy(y);
    return Rcpp::wrap(ifelse(xx < yy, xx*xx, -(yy*yy)));
}
```

Apart from the fact that we need to assign the two objects we obtain from R—which is a simple statement each thanks to the template magic in Rcpp—and the need for explicit return and Rcpp::wrap statements, the code is now identical between highly-vectorised R and C++. Rcpp sugar is written using expression templates and lazy evaluation techniques (Abrahams and Gurtovoy, 2004; Vandevoorde and Josuttis, 2003). This not only allows a much nicer high-level syntax, but also makes it rather efficient (as we detail in section 4 below).

2 Operators

Rcpp sugar takes advantage of C++ operator overloading. The next few sections discuss several examples.

2.1 Binary arithmetic operators

Rcpp sugar defines the usual binary arithmetic operators: +, −, *, /.

```cpp
// two numeric vectors of the same size
NumericVector x;
NumericVector y;

// expressions involving two vectors
NumericVector res = x + y;
NumericVector res = x - y;
NumericVector res = x * y;
NumericVector res = x / y;

// one vector, one single value
NumericVector res = x + 2.0;
NumericVector res = 2.0 - x;
NumericVector res = y * 2.0;
NumericVector res = 2.0 / y;

// two expressions
NumericVector res = x * y + y / 2.0;
NumericVector res = x * (y - 2.0);
NumericVector res = x / (y * y);
```

The left-hand side (lhs) and the right-hand side (rhs) of each binary arithmetic expression must be of the same type (for example, both should be numeric expressions). The lhs and the rhs can either have the same size, or one of them can be a primitive value of the appropriate type, for example adding a NumericVector and a double.
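As a quick, self-contained illustration of these operator rules (this example is ours, not from the vignette; the file and function names are made up), a complete file usable with the Rcpp attributes interface might look as follows:

```cpp
#include <Rcpp.h>
using namespace Rcpp;

// Element-wise a*x + y, written entirely with sugar operators:
// a scalar times a vector, then a vector plus a vector.
// [[Rcpp::export]]
NumericVector axpy(double a, NumericVector x, NumericVector y) {
    return a * x + y;
}
```

From R, something like `Rcpp::sourceCpp("axpy.cpp")` compiles the file and exposes `axpy(2, x, y)`, whose result matches `2 * x + y` computed directly in R.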
2.2 Binary logical operators

Binary logical operators create a logical sugar expression from either two sugar expressions of the same type, or one sugar expression and a primitive value of the associated type.

```cpp
// two numeric vectors of the same size
NumericVector x;
NumericVector y;

// expressions involving two vectors
LogicalVector res = x < y;
LogicalVector res = x > y;
LogicalVector res = x <= y;
LogicalVector res = x >= y;
LogicalVector res = x == y;
LogicalVector res = x != y;

// one vector, one single value
LogicalVector res = x < 2;
LogicalVector res = 2 > x;
LogicalVector res = y <= 2;
LogicalVector res = 2 != y;

// two expressions
LogicalVector res = (x + y) < (x*x);
LogicalVector res = (x + y) >= (x*x);
LogicalVector res = (x + y) == (x*x);
```

2.3 Unary operators

The unary operator- can be used to negate a (numeric) sugar expression, whereas the unary operator! negates a logical sugar expression:

```cpp
// a numeric vector
NumericVector x;

// negate x
NumericVector res = -x;

// use it as part of a numerical expression
NumericVector res = -x * (x + 2.0);

// two numeric vectors of the same size
NumericVector y;
NumericVector z;

// negate the logical expression "y < z"
LogicalVector res = !(y < z);
```

3 Functions

Rcpp sugar defines functions that closely match the behavior of R functions of the same name.

3.1 Functions producing a single logical result

Given a logical sugar expression, the `all` function identifies whether all of its elements are `TRUE`. Similarly, given a logical sugar expression, the `any` function identifies whether any of its elements is `TRUE`.

```cpp
IntegerVector x = seq_len(1000);
all(x*x < 3);
any(x*x < 3);
```

A call to either `all` or `any` creates an object of a class that has member functions `is_true`, `is_false`, `is_na` and a conversion-to-SEXP operator. One important thing to highlight is that `all` is lazy: unlike in R, there is no need to fully evaluate the expression. In the example above, the result of `all` is fully resolved after evaluating only the first two elements of the expression `x * x < 3`. `any` is lazy too, so it only needs to resolve the first element of the example above.

One important thing to note concerns the conversion to the `bool` type. In order to respect the concept of missing values (NA) in R, expressions generated by `any` or `all` cannot be converted to `bool`. Instead, one must use `is_true`, `is_false` or `is_na`:

```cpp
// wrong: will generate a compile error
bool res = any(x < y);

// ok
bool res = is_true(any(x < y));
bool res = is_false(any(x < y));
bool res = is_na(any(x < y));
```

3.2 Functions producing sugar expressions

3.2.1 `is_na`

Given a sugar expression of any type, `is_na` (just like the other functions in this section) produces a logical sugar expression of the same length. Each element of the result expression evaluates to `TRUE` if the corresponding input is a missing value, or `FALSE` otherwise.

```cpp
IntegerVector x = IntegerVector::create(0, 1, NA_INTEGER, 3);

is_na(x)
all(is_na(x))
any(!is_na(x))
```

3.2.2 `seq_along`

Given a sugar expression of any type, `seq_along` creates an integer sugar expression whose values go from 1 to the size of the input.
```cpp
IntegerVector x = IntegerVector::create(0, 1, NA_INTEGER, 3);

seq_along(x)
seq_along(x * x * x * x * x * x * x)
```

This is the laziest of these functions, as it only needs to call the `size` member function of the input expression; the input expression need not be resolved. The two examples above give the same result with the same efficiency at runtime. The compile time will be affected by the complexity of the second expression, since the abstract syntax tree is built at compile time.

3.2.3 seq_len

`seq_len` creates an integer sugar expression whose \(i^{\text{th}}\) element expands to \(i\). `seq_len` is particularly useful in conjunction with `sapply` and `lapply`.

```cpp
// 1, 2, ..., 10
IntegerVector x = seq_len(10);

lapply(seq_len(10), seq_len)
```

3.2.4 pmin and pmax

Given two sugar expressions of the same type and size, or one expression and one primitive value of the appropriate type, `pmin` (`pmax`) generates a sugar expression of the same type whose \(i^{\text{th}}\) element expands to the lowest (highest) value between the \(i^{\text{th}}\) element of the first expression and the \(i^{\text{th}}\) element of the second expression.

```cpp
IntegerVector x = seq_len(10);

pmin(x, x*x);
pmin(x*x, 2);

pmax(x, x*x);
pmax(x*x, 2);
```

3.2.5 ifelse

Given a logical sugar expression and either:

- two compatible sugar expressions (same type, same size), or
- one sugar expression and one compatible primitive,

`ifelse` expands to a sugar expression whose \(i^{\text{th}}\) element is the \(i^{\text{th}}\) element of the first expression if the \(i^{\text{th}}\) element of the condition expands to `TRUE`, the \(i^{\text{th}}\) element of the second expression if the condition expands to `FALSE`, or the appropriate missing value otherwise.

```cpp
IntegerVector x;
IntegerVector y;

ifelse(x < y, x, (x+y)*y);
ifelse(x > y, x, 2);
```

3.2.6 sapply

`sapply` applies a C++ function to each element of the given expression to create a new expression. The type of the resulting expression is deduced by the compiler from the result type of the function. The function can be a free C++ function such as the overload generated by the template function below:

```cpp
template <typename T>
T square(const T& x) {
    return x * x;
}

sapply(seq_len(10), square<int>);
```

Alternatively, the function can be a functor whose type has a nested type called `result_type`:

```cpp
template <typename T>
struct square : std::unary_function<T,T> {
    T operator()(const T& x) {
        return x * x;
    }
};

sapply(seq_len(10), square<int>());
```

### 3.2.7 `lapply`

`lapply` is similar to `sapply` except that the result is always a list expression (an expression of type `VECSXP`).

### 3.2.8 `sign`

Given a numeric or integer expression, `sign` expands to an expression whose values are one of 1, 0, -1 or `NA`, depending on the sign of the input expression.

```cpp
IntegerVector xx;

sign( xx )
sign( xx * xx )
```

### 3.2.9 `diff`

The \( i \)-th element of the result of `diff` is the difference between the \((i+1)\)-th and the \(i\)-th element of the input expression. Supported types are integer and numeric.

```cpp
IntegerVector xx;

diff( xx )
```

### 3.3 Mathematical functions

For the following set of functions, generally speaking, the \( i \)-th element of the result is obtained by applying the given function (say, `abs`) to the \( i \)-th element of the input expression. Supported types are integer and numeric.
For example:

```cpp
IntegerVector x;

abs( x )
exp( x )
floor( x )
ceil( x )
pow(x, z)     // x to the power of z
```

3.4 The d/q/p/r statistical functions

The framework provided by Rcpp sugar also permits easy and efficient access to the density, distribution function, quantile and random number generation functions provided by R in the Rmath library. Currently, most of these functions are vectorised over the first element, which denotes the size. Consequently, these calls work in C++ just as they would in R. Similar d/q/p/r functions are provided for the most common distributions: beta, binom, cauchy, chisq, exp, f, gamma, geom, hyper, lnorm, logis, nbinom, nbinom_mu, nchisq, nf, norm, rt, pois, t, unif, and weibull.

Note that the parameterization used in these sugar functions may differ from that of the top-level functions exposed in an R session. For example, the internal rexp is parameterized by scale, whereas the R-level stats::rexp is parameterized by rate. Consult the 'Distribution Functions' documentation for more details on the parameterization used for these sugar functions.

One point to note is that the programmer using these functions needs to initialize the state of the random number generator as detailed in Section 6.3 of the 'Writing R Extensions' manual (R Core Team, 2015). A nice C++ solution for this is to use a scoped class that sets the random number generator on entry to a block and resets it on exit. We offer the RNGScope class, which allows code such as

```cpp
RcppExport SEXP getRGamma() {
    RNGScope scope;
    NumericVector x = rgamma( 10, 1, 1 );
    return x;
}
```

As there is some computational overhead involved in using RNGScope, we are not wrapping it around each inner function. Rather, the user of these functions (i.e. you) should place an RNGScope at the appropriate level of your code.

4 Performance

TBD

5 Implementation

This section details some of the techniques used in the implementation of Rcpp sugar. Note that the user need not be familiar with the implementation details in order to use Rcpp sugar, so this section can be skipped upon a first read of the paper.

Writing Rcpp sugar functions is fairly repetitive and follows a well-structured pattern. So once the basic concepts are mastered (which may take time given the inherent complexities of template programming), it should be possible to extend the set of functions further by following the established pattern.

5.1 The curiously recurring template pattern

Expression templates such as those used by Rcpp sugar use a technique called the Curiously Recurring Template Pattern (CRTP). The general form of CRTP is:

```cpp
// The Curiously Recurring Template Pattern (CRTP)
template <typename T>
struct base {
    // ...
};
struct derived : base<derived> {
    // ...
};
```

The base class is templated by the class that derives from it: derived. This shifts the usual relationship between a base class and a derived class, as it allows the base class to access methods of the derived class.

5.2 The VectorBase class

The CRTP is used as the basis for Rcpp sugar with the VectorBase class template. All sugar expressions derive from a class generated by the VectorBase template.
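To see why this pattern gives the base class compile-time access to the derived class, here is a small self-contained example in the spirit of VectorBase (ours, not from the vignette; the class and member names are made up):

```cpp
#include <cstddef>
#include <iostream>

// The base class knows the concrete type at compile time and can
// forward calls to it without any virtual dispatch.
template <typename Derived>
struct VecBase {
    std::size_t size() const {
        return static_cast<const Derived&>(*this).size_impl();
    }
    double operator[](std::size_t i) const {
        return static_cast<const Derived&>(*this).get_impl(i);
    }
};

// A trivial "expression": the i-th element is i squared.
struct Squares : VecBase<Squares> {
    explicit Squares(std::size_t n) : n_(n) {}
    std::size_t size_impl() const { return n_; }
    double get_impl(std::size_t i) const { return double(i) * double(i); }
private:
    std::size_t n_;
};

int main() {
    Squares s(5);
    const VecBase<Squares>& expr = s;   // use the value through the CRTP base
    for (std::size_t i = 0; i < expr.size(); ++i)
        std::cout << expr[i] << ' ';
    std::cout << '\n';                  // prints: 0 1 4 9 16
}
```

There is no virtual dispatch anywhere: the base class casts itself to the derived type supplied as a template argument, which is essentially how VectorBase forwards operator[] and size() below.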
The current definition of VectorBase is given here:

```cpp
template <int RTYPE, bool na, typename VECTOR>
class VectorBase {
public:
    struct r_type : traits::integral_constant<int,RTYPE>{};
    struct can_have_na : traits::integral_constant<bool,na>{};
    typedef typename traits::storage_type<RTYPE>::type stored_type;

    VECTOR& get_ref() {
        return static_cast<VECTOR&>(*this);
    }

    inline stored_type operator[](int i) const {
        return static_cast<const VECTOR*>(this)->operator[](i);
    }

    inline int size() const {
        return static_cast<const VECTOR*>(this)->size();
    }

    /* definition omitted here */
    class iterator;

    iterator begin() const { return iterator(*this, 0); }
    iterator end() const { return iterator(*this, size()); }
};
```

The VectorBase template has three parameters:

- RTYPE: This controls the type of expression (INTSXP, REALSXP, ...)
- na: This embeds in the derived type information about whether instances may contain missing values. Rcpp vector types (IntegerVector, ...) derive from VectorBase with this parameter set to true because there is no way to know at compile-time if the vector will contain missing values at run-time. However, this parameter is set to false for types that are generated by sugar expressions, as these are guaranteed to produce expressions without missing values. An example is the is_na function. This parameter is used in several places as part of the compile-time dispatch to limit the occurrence of redundant operations.
- VECTOR: This parameter is the key of Rcpp sugar. This is the manifestation of CRTP. The indexing operator and the size method of VectorBase use a static cast of this to the VECTOR type to forward calls to the actual method of the derived class.

5.3 Example: sapply

As an example, the current implementation of sapply, supported by the template class Rcpp::sugar::Sapply, is given below:

```cpp
template <int RTYPE, bool NA, typename T, typename Function>
class Sapply : public VectorBase<
    Rcpp::traits::r_sexptype_traits<
        typename ::Rcpp::traits::result_of<Function>::type
    >::rtype,
    true,
    Sapply<RTYPE,NA,T,Function>
> {
public:
    typedef typename ::Rcpp::traits::result_of<Function>::type result_type ;
    const static int RESULT_R_TYPE =
        Rcpp::traits::r_sexptype_traits<result_type>::rtype ;
    typedef Rcpp::VectorBase<RTYPE,NA,T> VEC ;
    typedef typename Rcpp::traits::r_vector_element_converter<RESULT_R_TYPE>::type
        converter_type ;
    typedef typename Rcpp::traits::storage_type<RESULT_R_TYPE>::type STORAGE ;

    Sapply( const VEC& vec_, Function fun_ ) : vec(vec_), fun(fun_){}

    inline STORAGE operator[]( int i ) const {
        return converter_type::get( fun( vec[i] ) );
    }
    inline int size() const { return vec.size() ; }

private:
    const VEC& vec ;
    Function fun ;
};

// sugar
template <int RTYPE, bool _NA_, typename T, typename Function >
inline sugar::Sapply<RTYPE, _NA_, T, Function>
sapply( const Rcpp::VectorBase<RTYPE, _NA_, T>& t, Function fun ){
    return sugar::Sapply<RTYPE, _NA_, T, Function>( t, fun ) ;
}
```

5.3.1 The sapply function

sapply is a template function that takes two arguments.

• The first argument is a sugar expression, which we recognize because of the relationship with the VectorBase class template.
• The second argument is the function to apply.

The sapply function itself does not do anything; it is just used to trigger compiler detection of the template parameters that will be used in the sugar::Sapply template.

### 5.3.2 Detection of return type of the function

In order to decide which kind of expression is built, the Sapply template class queries its template argument via the Rcpp::traits::result_of template.
```cpp typedef typename ::Rcpp::traits::result_of<Function>::type result_type ; ``` The `result_of` type trait is implemented as such: ```cpp template <typename T> struct result_of{ typedef typename T::result_type type ; } ; template <typename RESULT_TYPE, typename INPUT_TYPE> struct result_of< RESULT_TYPE (*)(INPUT_TYPE) >{ typedef RESULT_TYPE type ; } ; ``` The generic definition of `result_of` targets functors with a nested `result_type` type. The second definition is a partial specialization targeting function pointers. ### 5.3.3 Identification of expression type Based on the result type of the function, the `r_sexptype_traits` trait is used to identify the expression type. ```cpp const static int RESULT_R_TYPE = Rcpp::traits::r_sexptype_traits<result_type>::rtype ; ``` ### 5.3.4 Converter The r_vector_element_converter class is used to convert an object of the function's result type to the actual storage type suitable for the sugar expression. ```cpp typedef typename Rcpp::traits::r_vector_element_converter<RESULT_R_TYPE>::type converter_type ; ``` ### 5.3.5 Storage type The `storage_type` trait is used to get access to the storage type associated with a sugar expression type. For example, the storage type of a REALSXP expression is double. ```cpp typedef typename Rcpp::traits::storage_type<RESULT_R_TYPE>::type STORAGE ; ``` 5.3.6 Input expression base type The input expression — the expression over which `sapply` runs — is also typedef’ed for convenience: ```cpp typedef Rcpp::VectorBase<RTYPE,NA,T> VEC ; ``` 5.3.7 Output expression base type In order to be part of the Rcpp sugar system, the type generated by the `Sapply` class template must inherit from `VectorBase`. ```cpp template <int RTYPE, bool NA, typename T, typename Function> class Sapply : public VectorBase< Rcpp::traits::r_sexptype_traits< typename ::Rcpp::traits::result_of<Function>::type >::rtype , true , Sapply<RTYPE,NA,T,Function> > ``` The expression built by `Sapply` depends on the result type of the function, may contain missing values, and the third argument is the manifestation of the CRTP. 5.3.8 Constructor The constructor of the `Sapply` class template is straightforward, it simply consists of holding the reference to the input expression and the function. ```cpp Sapply( const VEC& vec_, Function fun_ ) : vec(vec_), fun(fun_){} ``` ```cpp private: const VEC& vec ; Function fun ; ``` 5.3.9 Implementation The indexing operator and the `size` member function is what the `VectorBase` expects. The size of the result expression is the same as the size of the input expression and the \( i \)th element of the result is simply retrieved by applying the function and the converter. Both these methods are inline to maximize performance: ```cpp inline STORAGE operator[]( int i ) const { return converter_type::get( fun( vec[i] ) ); } inline int size() const { return vec.size() ; } ``` 6 Summary TBD References
{"Source-Url": "http://cran.cnr.berkeley.edu/web/packages/Rcpp/vignettes/Rcpp-sugar.pdf", "len_cl100k_base": 5082, "olmocr-version": "0.1.50", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 27459, "total-output-tokens": 6348, "length": "2e12", "weborganizer": {"__label__adult": 0.00029778480529785156, "__label__art_design": 0.00023674964904785156, "__label__crime_law": 0.00024271011352539065, "__label__education_jobs": 0.00025177001953125, "__label__entertainment": 5.030632019042969e-05, "__label__fashion_beauty": 9.942054748535156e-05, "__label__finance_business": 0.0001386404037475586, "__label__food_dining": 0.00035643577575683594, "__label__games": 0.0003688335418701172, "__label__hardware": 0.0006117820739746094, "__label__health": 0.0002818107604980469, "__label__history": 0.00014698505401611328, "__label__home_hobbies": 7.092952728271484e-05, "__label__industrial": 0.0003230571746826172, "__label__literature": 0.00011783838272094728, "__label__politics": 0.00019466876983642575, "__label__religion": 0.0003294944763183594, "__label__science_tech": 0.00634002685546875, "__label__social_life": 7.110834121704102e-05, "__label__software": 0.0045318603515625, "__label__software_dev": 0.984375, "__label__sports_fitness": 0.0002570152282714844, "__label__transportation": 0.0003261566162109375, "__label__travel": 0.00017631053924560547}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 21226, 0.01734]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 21226, 0.3473]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 21226, 0.70285]], "google_gemma-3-12b-it_contains_pii": [[0, 2086, false], [2086, 3898, null], [3898, 5021, null], [5021, 7242, null], [7242, 9083, null], [9083, 10374, null], [10374, 12768, null], [12768, 14966, null], [14966, 16434, null], [16434, 18364, null], [18364, 19977, null], [19977, 21226, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2086, true], [2086, 3898, null], [3898, 5021, null], [5021, 7242, null], [7242, 9083, null], [9083, 10374, null], [10374, 12768, null], [12768, 14966, null], [14966, 16434, null], [16434, 18364, null], [18364, 19977, null], [19977, 21226, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 21226, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 21226, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21226, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21226, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 21226, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21226, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21226, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21226, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21226, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 21226, null]], "pdf_page_numbers": [[0, 2086, 1], [2086, 3898, 2], [3898, 5021, 3], [5021, 7242, 4], [7242, 9083, 5], [9083, 10374, 6], [10374, 12768, 7], [12768, 14966, 8], [14966, 16434, 9], [16434, 18364, 10], [18364, 19977, 11], [19977, 21226, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21226, 0.0]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
2849e36b89c7345c4a899a396e1890d5ddd7e6c5
ABSTRACT

The representation of GUIs as documents is a technological trend that has been present for some years, but is only now about to significantly change the way in which most user interfaces are developed. This paper examines this change, explains the reasons behind it and the concepts involved. It compares the old-fashioned way of programming user interfaces as code units with the document-based paradigm, explaining why the latter is preferable. Furthermore, it discusses how the document-based paradigm can be extended to a very comprehensive and well-defined customization approach for GUIs, the document-oriented approach, which supports the paradigms of end-user development and robust content.

Author Keywords

GUI, document orientation, end-user development, WYSIWYG.

ACM Classification Keywords

H5.2 Information interfaces and presentation (e.g., HCI): User Interfaces [GUI].

INTRODUCTION

Within the last eight years, different markup languages for the description of graphical user interfaces have emerged [2, 11, 13]. These languages are usually based on the XML format and offer the ability to describe WIMP-style graphical user interfaces as we know them from common desktop operating systems. GUI definitions in these languages are textual XML documents, which are interpreted and visualized by a GUI rendering system and linked to a program logic. This approach is different from the traditional approach, which represents GUIs in the program code of an application. It is more similar to the approach used for the user interface of web applications, but web applications usually do not offer the richness and interaction of stand-alone GUI applications. With the proliferation of document-based user interfaces, however, this is likely to change. They have the potential to bridge the gap between flexible web applications and their stand-alone counterparts.

Despite the fact that these markup languages, like XUL for instance, have been around now for several years, the document-based GUI approach is scarcely used. There exist several technological implementations, but until now they have generally failed to attract the attention of software developers. But this is going to change. For good reasons many companies and organizations have already committed themselves to document-based GUIs in some way or other, and the remaining question is which of the many GUI markup languages will first gain general acceptance.

In order to understand the importance of that change, it is necessary to look at the different user interface technologies. However, the objective of this paper is not so much to discuss the technological side but to shed light on the concepts that those technologies implement. These concepts are of academic interest as they have a considerable impact on usability, robustness of UIs and the feasibility of end-user development.

In addition to providing an analysis of existing technologies and concepts, this paper presents a conceptual contribution which we call the document-oriented GUI paradigm. We suggest a document approach with access control in which the same WYSIWYG system is used for editing and rendering of GUIs in a framework. The ramifications of this approach include not only a simplified technological design, but also advanced customizability and guarantees about the robustness of a GUI.
In the following sections we will therefore present and compare four paradigms: code-based GUIs, GUI-oriented documents, document-based GUIs and finally document-oriented GUIs. In the end we sum up our conclusions.

CODE-BASED DESCRIPTION OF GUIS

Traditionally, GUIs are represented in code units not different in principle from the rest of an application's executable code. The GUI is described in the form of program statements that create GUI controls, set their properties, link them together etc., which makes it a requirement for people dealing with this representation to have programming skills. It can be even worse: the code for the GUI can be arbitrarily intermingled with the other program code, thus making both much harder to understand, change and maintain.

Program code is generally Turing-complete, which means that it is possible to describe GUIs in an arbitrarily sophisticated manner. It is possible to describe a GUI by means of a complicated algorithm, e.g. for optimization purposes, even though clarity will suffer. In general, it is impossible to analyze code-based descriptions of GUIs statically, e.g. just by looking at the program code.

To mitigate the necessity of programming in the process of GUI creation, there exist many visual tools for editing GUIs. These tools usually let a user compose a GUI in a more or less WYSIWYG style and then generate program code for that GUI which can be integrated into the application being developed. In Fig. 1, for example, we see the Visual Studio IDE showing a GUI design on the left and the corresponding generated code on the right side.

Figure 1. Visual GUI design and corresponding generated C# program code in the Visual Studio IDE.

**GUI-ORIENTED DOCUMENTS**

Some technologies deal with the relationship between documents and GUIs from a different direction: instead of trying to improve traditional code-based GUIs, they start with the traditional notion of documents. This notion understands documents as compositions of static elements which represent information, but do not allow any input from the user. The strategy of such technologies is to enrich documents with GUI elements, thus giving them basic capabilities for user interaction. However, such technologies usually do not reach the richness and level of interaction of real GUIs, as the concept is based on and restricted by the notion of a static document. Documents are merely used as makeshift GUIs and cannot satisfy the needs of professional GUI design as, for example, outlined in [3]. In the following sections we will examine typical technologies of this kind.

**HTML**

HTML started out as a document format; its original use was the publication of a phone directory on the network of the CERN research laboratory. Its structure is very much oriented towards static textual documents. HTML already supported the concept of navigation through hyperlinks, but was lacking other types of controls. Instead, HTML has long supported the representation of various kinds of static content, such as images or mathematical formulas. Only as the WWW became more and more popular was HTML extended with basic GUI-like controls such as buttons and forms. This makes HTML a typical example of a language for documents in which GUI elements can be embedded, although they do not really integrate in a natural manner. A good example illustrating the GUI-oriented characteristics of HTML is the Wiki [1].
A Wiki is essentially a storage for static documents, but its functionality also supports management, retrieval, creation and modification of documents. This functionality makes use of the GUI-like elements that were introduced into HTML, but fails to achieve the degree of interaction and graphics-orientation of a real GUI. The editor for documents in a Wiki is usually text-oriented and the graphical appearance of a document can only be modified by using a specialized document language. This is a sharp contrast to the intuitiveness and clarity of WYSIWIG editors, as they can be realized with real GUIs, and hampers end-user development [12]. One of our priorities in this paper is the unification between editor and viewer. The Wikis address this issue rather on the access rights level, by advocating the right to change to be granted to everybody with the right to read. In most implementations, however, the edit view has to be explicitly entered, and is totally different in presentation as well as in logical structure: the edit view does not contain the actual screen objects that are changed, leading to robustness issues [4]. In contrast, we will propose an approach that not only supports different access rights, but makes them the only difference between an editor and a viewer: the viewer is identical to the editor started in read-only mode. There is a connection to Web 2.0 approaches that try to overcome the read/edit dichotomy in Wikis. Such Wiki-like collaborative work approaches [7] provide solutions for global editability of documents with access control, but do not address the shortcomings that such documents have regarding their capability to describe full-blown GUIs. The limitations of HTML as the main language of the web lead in turn to limitations of today’s web-technology based clients. With the new alternatives of real GUI languages coming up, they are merely embarking on a lucky chance in today’s technology landscape. The novelty of the WWW, with its ability to display graphics and navigate through hyperlinks, has long worn out, and description languages for real GUIs could very well lead to new alternatives. Having a GUI instead of merely a static document is not a loss because traditional documents are, in electronic form, displayed in GUIs. Consequently, the concept of a GUI encompasses that of a traditional document. **Office Applications** The primary domain of office applications like MS Word or OpenOffice Writer is the creation of printable documents. Hence, such applications are usually based on a traditional static document model that is not entirely suitable for describing GUIs. Such a model is usually page based, positioning of elements is oriented at the text flow, and it is required to set fixed dimensions for the document elements. Nevertheless, in particular the widespread use of paper forms has inspired the addition of GUI elements for data input that can be used within the documents. These features have led to a new use case for text processors and their documents. In the new use case, one person has the role of developing a form and then sends it to persons who have the role of filling out only the created form fields, but electronically. Then they send it to the person whose role is to process the form. This approach has very important differences to traditional GUIs. - This technology can be used in organizations as a way for end-users to produce a substitute for enterprise applications. 
Enterprise applications can offer this as an alternative pathway for data input: even if the enterprise application has an online form, it also offers the possibility to check out the form as an office document and fill it out offline. - Unlike in HTML, creation of new forms is naturally done in a WYSIWIG fashion, which makes end-user development much easier and more widespread. - Unlike with systems that use web forms, the users can create drafts and store them, and keep copies of their submitted forms. This is a notorious drawback of web browsers; they do not save filled out forms correctly. - The limited capabilities of these applications are not suitable for the development of applications based on the observer pattern, since they do not support push-behavior. If we look at most of the current advanced document formats, text documents as well as spreadsheets, and the accompanying tools, we see that many of them support browser-like active behaviour. This shows that, in principle, user interfaces can be built with them. But this is more like a quick solution as it may compromise the look and feel of the resulting applications. THE DOCUMENT-BASED APPROACH As we have mentioned, the concept of a GUI is more powerful than that of a traditional document. The traditional code-based approach for representing GUIs has, however, certain drawbacks. In this section we want to discuss the idea of using documents in languages that are tailored to the domain of GUIs, and the existing technologies based on this idea. The motivation is to combine the expressiveness of GUIs with the advantages that are implied in the use of documents. The result is an equivalence between GUIs and documents. Talking about GUIs implies that there must also be program logic eventually. A GUI without program logic does not serve any real purpose. Consequently, documents that describe GUIs are only part of a system that uses such technology. This is illustrated in Fig. 2. The GUI on the left side is described by a document, the program logic on the right side is given as program code. Controls of the GUI are connected to the program logic by events. Events are usually actions performed by a user on the GUI, e.g. clicking a button. The GUI document does usually not contain the program logic itself, but just information about which part of the program logic should be invoked for each event. The parts of the program logic that are invoked by the GUI are called event handlers. The transition from code-based to document-based GUIs has also been described, for example, in [2]. In the following sections we will look at two of the most promising document technologies for the description of GUIs. We will discuss their potential to change the way in which software is used and developed, and eventually explain the advantages offered by document-based GUIs in general. Mozilla XUL XUL stands for XML User Interface Language and was developed by the Mozilla Foundation, which also developed the Firefox browser and other web-related desktop applications. XUL is primarily used for the GUIs of the Mozilla applications, but since these applications consequently include a rendering system for XUL GUIs, they are also suitable for rendering other GUIs. This is most significant for the Firefox browser, as a browser’s main job is to provide a user interface to the resources of the web. 
As we have discussed, the documents of the web are mainly GUI-oriented, but not fully GUI-capable, and the ability to render XUL documents with the same ease as HTML documents closes this gap. As a result, Mozilla offers a browser-centric approach to GUIs that allows for full-blown GUI applications over the web. The most prominent example of a web-enabled XUL GUI is the Mozilla Amazon Browser. In a suitable Browser, this application can be started by simply opening a URL and offers a GUI to the Amazon online shopping system. The usability and look and feel is exactly that of a stand-alone GUI, while access is analogous to opening a HTML page. Figure 3 shows a screenshot of the GUI on the left side, and a screenshot of the normal web UI on the right. Both UIs run in different tabs of the same browser window. While it is impressive to see the Mozilla browser switch with ease between these two UI paradigms, there are unfortunately comparatively few applications on the web that use XUL. Although Mozilla’s browser holds a good market share, most users use the MS Internet Explorer which does not support XUL, of course. So it is not really astonishing that there seems to be hardly any other real-world example of a XUL application like the Mozilla Amazon browser. One has to say that XUL, despite its nearly eight years of existence, has failed to gain significant popularity as yet. **Microsoft XAML** XAML stands for Extensible Application Markup Language and is, like XUL, an XML-based language for the description of full-blown GUIs that was designed by Microsoft for the new Vista version of its Windows operating system [14]. In contrast to XUL, which is mainly used in the context of the Mozilla browser, XAML will be processed by a part of the operating system, the Windows Presentation Foundation (WPF). Therefore this can be called an operating-system-centered UI approach. XAML has a good chance to become the first widespread document-based approach to GUIs. And what this could mean with regard to other technologies, in particular those of the Internet, can be just what we already see from the few examples like the Mozilla Amazon Browser in Fig. 3. XAML plus the safe execution environment of the .NET platform is likely to produce more and more web applications with real GUIs. Right now, web developers need to use a mix of several technologies in order to produce web sites that mimic the look and feel of real GUIs. Usually this involves a lot of technical details and programming skills. A widespread document-based GUI technology like XAML with the possibility to connect program logic in an easy and secure manner could alleviate these requirements and give end-user development of GUI web applications a boost. As a consequence, the superior possibilities of real GUIs can leverage better usability. Potentially, the distinction between web sites and GUI applications will blur, and there will just be GUI applications which can be loaded from the net and run either online or offline. **Advantages of the Document-based Approach** The following sections discuss the advantages of document-based GUIs over the traditional code-based ones, which have already been described in a previous section. The paradigm of GUI-oriented documents, as outlined in the section before, bears some similarity with the document-based one and consequently shares some of its advantages. But, as already mentioned, this approach does not support the creation of professional full-blown GUIs. 
**Separation of Concerns**

Separation of concerns [9] is a very important principle of systems design. It means that solutions addressing different requirements in a system should be kept separate during development, thus keeping its structural clarity intact and facilitating development. One common instance of such a separation is the separation of user interface and program logic, as illustrated in Fig. 2. This kind of separation is very common and also present in other UI modeling approaches such as, for example, the form-oriented analysis approach [5].

Document-based GUIs encourage or even enforce such separation because the GUI is given in a document language that is tailored to the description of GUIs. If there is support for program logic in such a document language, then it is a marginal feature, and program code cannot be arbitrarily mingled with the GUI description but has to follow its structure. Furthermore, there is a clear notion of how to connect a GUI to program logic, i.e., the interface between GUI and program logic is well defined. All this helps to ensure that GUI designers and programmers can work on their respective parts of the system without interfering with each other.

Another aspect of the separation of GUI and program logic is the possibility to easily have multiple different GUIs for the same application. This makes it possible to have different GUIs for different kinds of users, e.g. special GUIs for users with disabilities or GUIs in different languages. Consequently, this approach inherently offers solutions for accessibility and internationalization.

**Compatibility**

Code-based descriptions of GUIs depend much more on the technical specifics of their particular presentation environment than document-based ones. Different kinds of program code, i.e. in different programming languages, for different hardware, operating systems, and software components, use different execution mechanisms, data formats, linking methods and external code libraries. Code-based GUI descriptions are essentially program code, so an execution platform has to be compatible with the GUI description in all these aspects in order to be able to render the GUI. Cross-platform GUI libraries like GTK+ and abstract-machine-based language platforms like Java or .NET can mitigate this problem, but not eliminate it completely, because it is rooted in the generally higher complexity of program code compared to a domain-specific document-based GUI description language. GUI documents have to be interpreted in some manner anyway, and the higher-level representation allows an interpreter to deal with the document in a more flexible and error-tolerant manner. Also, many document-based GUI description languages are suitable for single authoring [10]. A code-based GUI, on the other hand, is generally much more fragile and will not execute properly if even minor technical details do not fit.

**Small Footprint and Isolation**

With regard to system resources and footprint, a document-based GUI is similar to an HTML document on the web. An application with a code-based GUI usually requires an installation on each machine the application is used on, and the installation often requires more access rights than a regular user has. Such an installation takes time and bears safety and security risks, as it may potentially render a system dysfunctional.
In contrast to that, applications with document-based GUIs mostly work without installation and are very easy to access, as the example of the Mozilla Amazon Browser shows. As documents, such GUIs are in general self-contained and portable. The GUI document serves as a central access point to the whole application and can also be accessed over the Internet. Program logic can be loaded on demand. All this facilitates the development of lightweight applications.

Because GUI documents do not run by themselves but have to be processed by a separate rendering system, this rendering system can be designed in a way that guarantees that multiple GUIs are isolated from each other. This is important to prevent a misbehaving GUI from interfering with another. The Mozilla Amazon Browser GUI, for example, is restricted to a single tab of the browser window.

**Editability**

Because they are documents, GUI documents are editable. This makes it possible for a user to change a GUI, which can usually be done with a simple text editor. This can be, as it was with HTML, a catalyst for end-user development. In contrast to this, code-based GUIs are hard-coded and, once compiled, cannot be changed without significant effort.

**Non-Universality and Abstraction**

GUI document languages are in a sense non-universal. Universality is not necessary for a language that is tailored to GUIs, especially if we recall the concept of separation of concerns. The GUI document language should focus only on the GUI description; not being able to do anything else in this language avoids a lot of problems. Non-universality can be seen as one of the most important benefits of GUI document languages.

One might think at first that non-universality is a shortcoming, but one has to be aware that universality has its own disadvantages. First of all, universal programming languages allow arbitrary complexity, and shielding end-users from the complexity of program code facilitates end-user development. And there is an even stronger argument: while it is impossible in general to analyse the code of a universal language statically, reducing the capabilities of a language can make static analysis feasible and efficient. Such static analysis of a GUI, i.e. analysis before the GUI is actually executed, can detect safety and security flaws, so that faulty behaviour can be avoided beforehand. Moreover, a reduced complexity of the language makes it much easier for a rendering system to modify a GUI on the fly, e.g. for adapting it to particular layout or look-and-feel settings.

Note that the fact that most GUI document languages are textual is unrelated to the question of non-universality. Program source code is usually textual, but universal. On the other hand, a non-universal GUI document could be stored as a set of serialized binary objects; as long as an editor for GUI documents is available, this does not affect the user.

The GUI document language is tailored to the domain of GUIs, so it offers a higher level of abstraction for that domain than a universal programming language, which has no such specialization. A GUI language can offer, for example, higher-level constructs or shorthands for typical GUI constructions and a simplified interface to the developer. As a result, such abstractions facilitate end-user development as well.

**THE DOCUMENT-ORIENTED APPROACH**

We propose a new approach for GUIs which is compatible with the document-based one, but takes the idea of GUIs as documents a step further.
In this new approach GUIs are also documents. However, they are edited and displayed with the same tool. The difference between the GUI when it is created and the GUI when it is displayed for actual use lies in the access rights the user has. This approach has several implications and can be seen as a new design pattern for the development of GUIs. We call it the document-oriented approach.

There is a certain analogy to the GUI functionality of some office applications. As we have discussed in the section about GUI-oriented documents, a user can create documents with simple GUIs in such applications, e.g. for entering data into forms. Such a document enhanced by GUI elements can then be sent to other users, who usually open the document and use the GUI in it with the office application itself.

Our paradigm shift is not so much concerned with the actual implementation of the GUI and the editing view. The aim is rather to establish a simpler model of the user interface framework. The document-oriented paradigm offers a number of advantages over traditional code-based GUIs and also over document-based ones. These advantages are described in the following sections.

**Unification of GUI Development Tool and GUI Framework**

For current GUIs, visual editors have become standard. They can be called WYSIWYG editors, as they present the emerging GUI very similarly to the actual running GUI, as can be seen in the screenshot example of Fig. 4. Yet the GUI controls used in these editors to render the drafts might be different from the GUI framework that will be invoked at runtime. Every element of the runtime GUI is mirrored by an element of the editor that allows the editing and customization of this object, providing functions that are not supported by the controls as they are rendered in a running GUI. The editing process usually ends with a generation step that produces a representation of the edited GUI. In the document-oriented approach, by contrast, the GUI development tool and the GUI framework are one and the same.

**Robustness**

The document-oriented model implicitly supports the concept of robust content creation [4] and facilitates configuration of GUIs. Because a document-oriented GUI is rendered with the same software that is used for its creation, the way it is shown during usage is identical to the way it is shown during creation. This means that the WYSIWYG property of the GUI editor can be guaranteed. Even if in editing mode the GUI indicates the additional rights by additional controls, the GUI will generally look the same, no matter whether it is seen by the developer or the user.
There cannot be a mismatch between the rendering functionality of the editor and that of the viewer that could lead to an unviewable GUI, since the conceptual identity of viewer and editor makes it impossible to edit something that cannot be viewed, or to view something that cannot be edited.

**Simplified GUI Editor Design**

The equivalence of the developer and user views has implications for the way we construct a WYSIWYG document editor as well. Usually, the GUIs as they appear when they are edited are different from the GUIs when they are used: in a WYSIWYG editor, controls of the GUI usually have additional and modified functionality for changing a control's properties. For example, a bounding box is shown that can be dragged in order to resize a control. It is, of course, a challenge for an editor to show the controls that are edited, possibly by actually using them, while augmenting and modifying their usual behaviour to suit the purpose of the editor. In the document-oriented paradigm we would implement such editing functionality for each control by default, i.e. every control comes with the functionality needed to resize or position it, and so on. This makes the internal design of the editor much easier. Putting functionality for editing into each control makes the implementation of controls more difficult, but since they are heavily reused this is well worth the effort.

**Simplified Controls**

The relation between labels and text input fields illustrates the principle of using rights most clearly. In a standard GUI, labels and text input fields are distinct. In our approach, they differ only in the rights the user has. While a label is read-only, and maybe has a slightly different visual style, the text input field can be edited. Analogously, a list box where textual entries can be inserted and deleted is essentially the same as a list box with static content, except that the former permits write access to its entries. Even the full-blown WYSIWYG editor itself can be used as a sort of input field in a program, allowing a user to input a whole document, which is possible due to the recursive character of the document paradigm. As a result, the editing capabilities of the controls generalize and simplify the way in which they can be used in programs, leading to a whole range of new possibilities.

**Comprehensive Customization**

Many of the discussed technologies, like XUL and XAML, define GUI parts such as windows for auxiliary dialogues in single XML files. Since these technologies use a declarative approach, we call such units of code GUI declaration units.

Our approach offers comprehensive customization. Currently, even in the new document-based implementations of user interfaces, the GUI declaration units describing the application interface are seen as a part of the application. In contrast, the document-oriented approach sees the GUI declaration units as part of the individual user documents. In a first approximation, each user document contains its own copy of the GUI declaration units for each auxiliary dialogue, such as a print dialogue. In current technologies for GUI declaration units like XUL and XAML, such customization works by setting the prefill values declared in these GUI declaration units. Another example is a feature to resize different panels in a window: if the GUI developer wants to grant this customization option to the user, he grants the appropriate editing right to the end-user; conversely, the resizing can be blocked by withdrawing this access right.
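The following sketch illustrates this idea. Note that we do not prescribe a concrete syntax for the document-oriented approach, so all element and attribute names below are invented for illustration only: a user document embeds its own copy of a print dialogue's GUI declaration unit, together with prefill values and access rights that determine what the end-user may change.

```xml
<!-- Hypothetical syntax, invented for illustration and not defined by any
     existing technology: a user document carrying its own copy of the
     print dialogue's GUI declaration unit. -->
<userDocument>
  <content>...</content>
  <guiDeclarationUnit id="print-dialogue">
    <!-- A label is simply a text field without write access. -->
    <textfield value="Printer:" access="read"/>
    <!-- A prefill value that the user may overwrite. -->
    <textfield value="Office printer, room 2.17" access="read-write"/>
    <!-- Granting or withdrawing the 'resize' right enables or blocks
         this customization option for the end-user. -->
    <panel id="preview" rights="resize"/>
  </guiDeclarationUnit>
</userDocument>
```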
**Decomposition Mechanism**

If one wants to have more elaborate options to configure the auxiliary dialogues, then one can reuse the GUI declaration units across different user documents. Note that the difference to the document-based approach is that the scope of reuse is no longer necessarily restricted to the application.

The document-oriented approach works well with decomposition mechanisms like style sheets. The purpose of this decomposition is multiple reuse of parts of GUI declaration units in order to enable centralized maintenance: a single change in the style sheet changes the property in question in all the documents that use this style sheet. A conceptually simpler way than style sheets, however, is to use an inclusion concept. We prefer to view such an inclusion as an instance of the transclusion principle [8]. Transclusion is the inclusion of a document into another document by reference. The inclusion takes place every time the user document is opened. This process is dynamic enough to enable centralized maintenance, but it is not (necessarily) supposed to deliver updates instantaneously to running applications. The recurring question of how to reconcile the conflicting goals of delivering timely updates of changed information on the one hand and enabling the user to work consistently on one document, uninterrupted by updates, on the other hand is a different problem that belongs rather in the area of form-oriented interfaces [5].

**The Scope of Changes Becomes Obvious**

The transclusion approach offers the chance to increase the transparency of the user interface. In current user interfaces, there is no way for the user to tell the scope of a certain option, as the following case study shows. Consider two classical auxiliary dialogues in word processors, the print dialogue and the page setup dialogue. The print dialogue is often in the scope of the current application invocation, valid for all open documents, but not persistent. The page setup dialogue, on the other hand, is in the scope of the current document and is stored persistently in the document. But in two office suites (MS Office and Open Office for XP), for example, there is no way for the user to tell the difference. Quite the contrary: in one office suite, both auxiliary dialogues are accessible under the file menu. A further difference between the two office suites is the subdialogue of the print dialogue that allows the user to choose the number of pages per sheet. In one office suite, clicking "ok" in the subdialogue has only a temporary effect unless the actual printout is performed. In the other office suite, "ok" in the subdialogue does make this change stick for the application session.

With the transclusion principle of document-orientation we can achieve transparency for the user here without making actual changes to the look and feel. The following hence shows that document-orientation does not necessarily enforce a new dialogue structure; it only adds enlightening information. In the discussed case, a sensible introduction of document-orientation would be to associate a file location with each auxiliary dialogue, and to show this file location, for example, in the header of the auxiliary dialogue. For the page setup, the shown location would be the current file, while for the print dialogue the location would be one stored in the user profile. This way the user would have the chance to identify the scope of his action.
This proposal has not been tested in usability studies, but we argue that this additional information could not be detrimental. We point out that this service of document-orientation is in accordance with the demand of ISO 9241-10 regarding the suitability of applications for learning: "The user is able to obtain information on the model on which the application is based" [6].

The transclusion approach, if fully employed, would of course allow more possibilities for the reuse of customizations and settings. Take again a print dialogue as an example. If a user usually wants the same print settings for the browser and the text editor, for example a printer next to the user's office, then he chooses the same document as the print dialogue for both applications. However, if the user often needs different print properties for the slide presenter, for example a colour printer, then the user can use a different print dialogue document for the slide editor.

**CONCLUSION**

We described the traditional code-based approach and the newer document-based approach for the description of GUIs and explained why the latter will most likely change the way we deal with GUIs. We also presented the document-oriented approach, which goes beyond the document-based one. This novel approach offers a range of new advantages, such as a comprehensive and exhaustive concept of GUI customization, robustness, and new possibilities in the way controls can be used. The research opens the path for interesting further framework development as well as empirical studies.

**REFERENCES**
{"Source-Url": "https://www.cs.auckland.ac.nz/~lutteroth/publications/DraheimLutterothWeber2006-GUIsAsDocuments.pdf", "len_cl100k_base": 7071, "olmocr-version": "0.1.49", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 24117, "total-output-tokens": 8296, "length": "2e12", "weborganizer": {"__label__adult": 0.0002605915069580078, "__label__art_design": 0.0008740425109863281, "__label__crime_law": 0.00019299983978271484, "__label__education_jobs": 0.0006666183471679688, "__label__entertainment": 6.264448165893555e-05, "__label__fashion_beauty": 0.00011426210403442384, "__label__finance_business": 0.00011152029037475586, "__label__food_dining": 0.0002181529998779297, "__label__games": 0.00040221214294433594, "__label__hardware": 0.0007052421569824219, "__label__health": 0.00022327899932861328, "__label__history": 0.00020396709442138672, "__label__home_hobbies": 5.716085433959961e-05, "__label__industrial": 0.0002244710922241211, "__label__literature": 0.00023925304412841797, "__label__politics": 0.0001308917999267578, "__label__religion": 0.00032138824462890625, "__label__science_tech": 0.009429931640625, "__label__social_life": 4.756450653076172e-05, "__label__software": 0.011688232421875, "__label__software_dev": 0.97314453125, "__label__sports_fitness": 0.00016069412231445312, "__label__transportation": 0.0002796649932861328, "__label__travel": 0.00014781951904296875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 38807, 0.00785]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 38807, 0.46176]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 38807, 0.92863]], "google_gemma-3-12b-it_contains_pii": [[0, 3962, false], [3962, 9367, null], [9367, 14552, null], [14552, 20829, null], [20829, 25009, null], [25009, 30060, null], [30060, 36260, null], [36260, 38807, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3962, true], [3962, 9367, null], [9367, 14552, null], [14552, 20829, null], [20829, 25009, null], [25009, 30060, null], [30060, 36260, null], [36260, 38807, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 38807, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 38807, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 38807, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 38807, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 38807, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 38807, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 38807, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 38807, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 38807, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 38807, null]], "pdf_page_numbers": [[0, 3962, 1], [3962, 9367, 2], [9367, 14552, 3], [14552, 20829, 4], [20829, 25009, 5], [25009, 30060, 6], [30060, 36260, 7], [36260, 38807, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 38807, 0.0]]}
olmocr_science_pdfs
2024-11-24
2024-11-24